Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It is to be understood that reference herein to "a number" means one or more and "a plurality" means two or more. "And/or" describes the association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
Referring to fig. 1, an architecture diagram of a virtual scene development and presentation system according to an exemplary embodiment of the present application is shown. As shown in fig. 1, the virtual scene development and presentation system includes a development-side device 110 and a virtual scene presentation device 120.
The development-side device 110 may be a computer device corresponding to a developer/operator of a virtual scene.
After the virtual scene is developed, data related to rendering of the virtual scene may be stored or updated in the virtual scene presentation device 120.
The virtual scene presentation device 120 is a computer device that runs an application program corresponding to the virtual scene. When the virtual scene presentation device 120 is a user terminal, the application program may be a client program; when the virtual scene presentation device 120 is a server, the application program may be a server-side/cloud program.
The virtual scene refers to a virtual scene displayed (or provided) when an application program runs on a terminal. The virtual scene may be a simulated environment of the real world, a semi-simulated and semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, and a three-dimensional virtual scene. The following embodiments are illustrated, by way of example and not limitation, with the virtual scene being a three-dimensional virtual scene.
For a three-dimensional virtual scene, in order to provide a better user experience, the user is generally allowed to adjust the viewing angle of the virtual scene over a relatively large range. However, in many virtual scenes (such as an auto-chess game scene), the viewing angles from which users observe the virtual scene are usually concentrated in a small portion of the viewing angle space, and viewing angle adjustment is rarely performed. That is, in these virtual scenes, the view angle parameters used in graphics rendering are often concentrated in one or two small subspaces; for example, the full-scene overview view may be used less than 20% or even less than 1% of the time, while the view angle at other times is fixed at a certain position. In such a virtual scene, if the view angle parameters can be classified into one or a small number of sparse sets, the continuity of the visibility information brought by the small range of view angles can be used to pre-calculate the scene visible set corresponding to each view angle parameter set. Based on this, the embodiments of the application pre-calculate the scene model visible set corresponding to the view angle parameters; during virtual scene display, when the view angle parameters meet the condition, rendering is submitted through the pre-calculated scene model visible set, so that vertex shading of occluded models is reduced and the rendering efficiency is improved.
The scheme is divided into an offline part and an online part, wherein the offline part is responsible for pre-calculating the relevant information of the scene model visible set corresponding to the view angle parameters, and the online part is responsible for submitting and rendering according to the relevant information of the scene model visible set under specific view angle parameters in the virtual scene operation process.
The offline part may be executed by the development-side device 110 and may include: acquiring a high-probability view angle set corresponding to a virtual scene, the high-probability view angle set including camera view angles whose access probability in the virtual scene is greater than a probability threshold; determining visible model part indication information based on the high-probability view angle set, the visible model part indication information being used to indicate the part of each scene model in the virtual scene that is not occluded under the high-probability view angle set; and generating model visibility information corresponding to the high-probability view angle set. The visibility information is used to instruct the virtual scene presentation device to submit rendering data of the model part indicated by the visibility information to a rendering component when the target view angle belongs to the high-probability view angle set; the target view angle is the camera view angle from which the virtual scene is observed.
The online part may be performed by the virtual scene presentation device 120 and may include: acquiring a target view angle, the target view angle being the camera view angle from which the virtual scene is observed, where the virtual scene corresponds to a high-probability view angle set that includes camera view angles whose access probability in the virtual scene is greater than a probability threshold; in response to the target view angle belonging to the high-probability view angle set, acquiring model visibility information corresponding to the high-probability view angle set, the model visibility information being used to indicate the part of each scene model in the virtual scene that is not occluded under the high-probability view angle set; submitting rendering data of the model part indicated by the visibility information to a rendering component so as to render a scene picture of the virtual scene through the rendering component, where the sub-model information includes the rendering data of the model part indicated by the visibility information; and displaying the scene picture of the virtual scene.
In the above scheme, indication information of an unoccluded model part in a high probability view set in a virtual scene is generated in advance, and in a virtual scene rendering process, when a target view of a user is in the high probability view set, the unoccluded model part in the high probability view set is rendered, while the occluded model part does not need to be submitted for rendering, and correspondingly, the occluded model part does not need to be subjected to vertex coloring, so that vertex coloring steps in the rendering process can be reduced under most conditions, and the rendering efficiency of the virtual scene is improved.
Referring to fig. 2, a flowchart of an information generating and screen displaying method in a virtual scene according to an exemplary embodiment of the present application is shown. The method may be performed by computer devices, which may be the development-side device 110 and the virtual scene presentation device 120 in the system shown in fig. 1. As shown in fig. 2, the method may include the following steps:
step 201, a development end device acquires a high-probability view angle set corresponding to a virtual scene, where the high-probability view angle set includes camera view angles whose access probability in the virtual scene is greater than a probability threshold.
In this embodiment of the application, the virtual scene may correspond to one or more high-probability view angle sets, and any two high-probability view angle sets have no intersection. For example, the virtual scene may correspond to two disjoint high-probability view angle sets.
The high-probability view angle set may include one or more camera view angles that are accessed with a high probability. A camera view angle being accessed means that, during the running of the virtual scene, the camera view angle for observing the virtual scene is set (either by default by the system or according to a view angle adjustment operation of a user) to that camera view angle.
In one possible implementation, the high probability view set may be set manually by a developer/operator.
In another possible implementation manner, the high-probability view angle set may also be obtained by analyzing statistics by the development-side device.
For example, the development end device may obtain an operation record of the virtual scene, count the visited probability of each camera view angle corresponding to the virtual scene based on the operation record of the virtual scene, and add the camera view angle, of which the visited probability is higher than the probability threshold, to the high probability view angle set. At this time, the probability threshold may be preset in the development-side device by a developer.
The probability that a camera view angle is accessed may be the ratio between the number of times that camera view angle is accessed and the total number of times all camera view angles in the virtual scene are accessed.
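For illustration, a minimal sketch of the statistics described above is given below, assuming the operation records are simply a list of camera-view identifiers, one per recorded access; the record format and the threshold value are assumptions of this example, not a format defined by the application.

```python
from collections import Counter

def build_high_probability_view_set(operation_records, probability_threshold):
    """Build the high-probability view set from operation records of the scene."""
    access_counts = Counter(operation_records)
    total_accesses = sum(access_counts.values())

    high_probability_view_set = set()
    for view, count in access_counts.items():
        # Access probability = accesses of this view / total accesses of all views.
        if count / total_accesses > probability_threshold:
            high_probability_view_set.add(view)
    return high_probability_view_set

# Usage: two player views dominate the records, the overview view does not.
records = ["player_A"] * 480 + ["player_B"] * 470 + ["overview"] * 50
print(build_high_probability_view_set(records, probability_threshold=0.2))
# -> {'player_A', 'player_B'}
```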
After the development end device acquires the high-probability view angle set, visible model part indication information can be determined based on the high-probability view angle set, and the visible model part indication information is used for indicating the unoccluded model part of each scene model in the virtual scene under the high-probability view angle set. The process may refer to step 202, described below, and the description below of step 202.
Step 202, the development end equipment acquires a polygon visibility array of each scene model under the high-probability visual angle set, and the polygon visibility array is used as partial indication information of the visible model; the polygon visibility array is used for indicating whether the polygons in the scene models are visible under the high-probability view angle sets respectively.
In a three-dimensional virtual scene, one virtual scene may include a plurality of scene models, such as buildings, virtual characters, virtual terrain, and the like. A scene model is composed of at least two polygons; for example, in the general case a scene model may be composed of several triangles, and adjacent triangles have common edges. The triangles are connected through these edges and together form the outer surface of the scene model.
In the embodiment of the application, based on the principle that a scene model is composed of polygons, it can be determined whether each polygon in the scene model is visible under the high-probability view angle set, so that invisible polygons can be culled under the high-probability view angle set, achieving the effect of stripping out the part of the scene model that is occluded under the high-probability view angle set.
In a possible implementation manner, the polygon visibility array includes values corresponding to respective polygons in the scene model.
The process of obtaining the polygon visibility array of each scene model under the high-probability view set may be as follows:
acquiring a target polygon, the target polygon being a polygon, among the polygons contained in a target scene model, that is in a visible state under a first camera view angle; the target scene model is a scene model, among the scene models, that is occluded under the first camera view angle; the first camera view angle is any camera view angle in the high-probability view angle set;
and setting the value corresponding to the target polygon in the polygon visibility array as a specified value.
In this embodiment of the application, the development-side device may use an array to represent whether each scene model in the virtual scene is visible under the high-probability view angle set. For example, the length of the array may be the number of polygons contained in each scene model in the virtual scene, and each value indicates whether one polygon is visible under the high-probability view angle set; for instance, for a polygon that is visible under the high-probability view angle set, the value of that polygon in the array may be 1, and otherwise the value is 0.
In one possible implementation, before obtaining the target polygon, the method further includes:
screening out a first type scene model meeting the shielding condition and a second type scene model meeting the shielded condition from the scene models;
and determining a scene model which is shielded by the first type scene model in the second type scene model under the first camera view angle as the target scene model.
In a virtual scene, there are usually multiple scene models simultaneously, and in a single viewing angle, there may be some models of the multiple scene models that are occluded, while others are not. If the visibility detection is performed on the polygons in all the scene models in the virtual scene, a higher calculation amount is introduced, and the offline elimination efficiency is affected.
In this regard, in this embodiment of the present application, before acquiring the target polygon under a target camera view angle, it may first be determined which scene models in the virtual scene are occluded by other scene models under that camera view angle, and the occluded scene models are determined as the target scene models. When acquiring the target polygon, the step of acquiring the target polygon is then performed only for the occluded scene models; the polygons in the other, non-occluded scene models are all considered visible under that camera view angle, and the polygons other than the target polygons in the occluded scene models may be considered invisible.
That is, in the polygon visibility array, the value corresponding to the target polygon in the occluded scene model and the value corresponding to each polygon in the non-occluded scene model may be set to 1, and the values corresponding to polygons other than the target polygon in the occluded scene model may be set to 0.
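As a small illustrative sketch of this assignment rule (the per-model bookkeeping and data layout are simplified assumptions of the example):

```python
def fill_polygon_visibility(polygon_counts, occluded_model_ids, target_polygons):
    """polygon_counts: dict model_id -> number of polygons in that model.
    occluded_model_ids: set of models occluded under the first camera view angle.
    target_polygons: dict model_id -> indices of polygons found visible.
    Returns dict model_id -> 0/1 polygon visibility list."""
    visibility = {}
    for model_id, polygon_count in polygon_counts.items():
        if model_id not in occluded_model_ids:
            # Non-occluded model: every polygon is treated as visible.
            visibility[model_id] = [1] * polygon_count
        else:
            # Occluded model: only the target polygons are marked visible.
            values = [0] * polygon_count
            for p in target_polygons.get(model_id, ()):
                values[p] = 1
            visibility[model_id] = values
    return visibility
```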
In a possible implementation manner, the process of obtaining the target polygon may be as follows:
numbering the vertexes of each polygon in the target scene model;
assigning different color values to the vertices of each polygon based on the number of the vertices of the polygon;
performing vertex coloring rendering on the target scene model based on the first camera view to obtain a vertex coloring rendering image corresponding to the target scene model;
obtaining visible vertexes in vertexes of each polygon based on color values of each pixel point in the vertex coloring rendering image;
and acquiring the target polygon based on visible vertexes of the polygons.
In the embodiment of the application, which polygons in a scene model are visible and which are invisible can be determined through offline rendering. For example, the development-side device assigns different color values to the vertices of each polygon in the scene model, and then performs vertex-color rendering of the scene model according to the previously set rendering-related parameters to obtain a vertex-colored image in which the vertices of the visible polygons are mapped into the image. The development-side device then traverses the color values of each pixel in the image, determines the vertices corresponding to the traversed color values as visible vertices, and, from those visible vertices, obtains the visible polygons in the scene model.
In practical application, the polygons in each scene model may be triangles, and triangles may share vertices. In this case, the program may split the shared vertices into unshared copies; the split vertices have coincident positions but carry the vertex colors of their respective triangles. Since the splitting of shared vertices is performed only to solve the visibility of the triangles they belong to, the temporarily created model can be discarded after the corresponding visibility information is obtained, so the topology of the finally displayed model is not affected. Because the multisample anti-aliasing function is disabled during this rendering, the color of a single pixel is never a blend of two colors; a triangle smaller than one pixel may, however, be hidden. Since the rendering resolution in the pre-calculation process can be several times that of the final rendering, assigning different colors to the copies of a shared vertex does not affect the subsequent decoding of vertex numbers.
In this embodiment of the present application, the development-side device may reorganize the vertex array of the model according to the triangle numbers and assign each triangle a determined vertex color, where the encoding from the index number to the color is as follows:
wherein the color consists of three channels, red, green and blue, each with a value from 0 to 255. The corresponding decoding from a color back to the index number is as follows:
index(color) = color_blue × 256² + color_green × 256 + color_red − 1
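A minimal sketch of this index/color mapping: the decoding follows the formula above, and the encoding is assumed to be its inverse (offset by one so that the black background color (0, 0, 0) decodes to −1, i.e. "no triangle").

```python
def index_to_color(index):
    """Encode a triangle index as an (r, g, b) color; inverse of the decoding formula."""
    n = index + 1  # offset by 1 so that black (0, 0, 0) is reserved for the background
    return n % 256, (n // 256) % 256, (n // (256 * 256)) % 256

def color_to_index(color):
    """Decode an (r, g, b) color back to the triangle index:
    index(color) = blue * 256^2 + green * 256 + red - 1."""
    red, green, blue = color
    return blue * 256 * 256 + green * 256 + red - 1

assert color_to_index(index_to_color(123456)) == 123456
assert color_to_index((0, 0, 0)) == -1  # background pixel maps to no triangle
```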
the development end device can generate model visibility information corresponding to the high-probability visual angle set based on the polygon visibility array; wherein the model visibility information is used to indicate portions of the respective scene models in the virtual scene that are not occluded under the set of high probability perspectives. The process of generating model visibility information may refer to the description of step 203 to step 206 below.
Step 203, the development end equipment sorts the polygons of the scene models based on the polygon visibility array; the polygons visible under the high-probability view angle set are continuous in the sorted polygons of each scene model.
In the embodiment of the application, in order to improve the efficiency of submitting the sub-model corresponding to a high-probability view angle set for rendering in the subsequent virtual scene display process, the development-side device may reorder the related data of the polygons of each scene model in the virtual scene (including the polygon array and the vertex array) according to the polygon visibility array, that is, arrange the related data of the polygons belonging to the same high-probability view angle set together, so that the rendering data to be submitted can be quickly looked up during subsequent submission.
Step 204, the development end equipment acquires the polygon visible information of the unoccluded model part based on the polygon index numbering result; the polygon index numbering result is the result of sequentially numbering the indexes of the polygons of the sequenced scene models; the polygon visibility information contains the index interval of the polygon in the unoccluded model portion.
In this embodiment of the application, in order to improve the query efficiency when rendering is subsequently submitted, the indexes of the polygons may be renumbered according to the sorting result in step 203, so as to query the polygon array corresponding to the high-probability view set in the subsequent step.
Step 205, the development end equipment obtains the vertex visible information of the unoccluded model part based on the polygon vertex index numbering result; the polygon vertex index numbering result is the result of sequentially numbering the indexes of the vertexes in the sequenced polygons of the scene models; the vertex visibility information includes index ranges for polygon vertices in the unoccluded model portion.
Similar to the index of the polygon, in the embodiment of the present application, the indexes of the vertices of the polygon may be renumbered according to the sorting result in step 203, so as to query the vertex array corresponding to the high probability view set in the following.
And step 206, the development end equipment acquires the polygon visible information of the unoccluded model part and the vertex visible information of the unoccluded model part as the model visibility information corresponding to the high-probability view set.
That is to say, in the embodiment of the application, in the offline stage the development-side device removes the occluded model part in the virtual scene at the granularity of polygons and polygon vertices, and keeps the model part that is not occluded as the sub-model of the virtual scene corresponding to the high-probability view angle set.
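For concreteness, a minimal sketch of how the resulting model visibility information might be organized per scene model; the field names are illustrative assumptions, not a format defined by the application.

```python
from dataclasses import dataclass

@dataclass
class IndexInterval:
    offset: int  # start position in the sorted polygon / vertex array
    count: int   # number of consecutive visible entries

@dataclass
class ModelVisibilityInfo:
    # Index interval of the unoccluded polygons in the sorted polygon (triangle) array.
    polygon_interval: IndexInterval
    # Index interval of the unoccluded polygon vertices in the sorted vertex array.
    vertex_interval: IndexInterval

# One such record can be kept per scene model and per high-probability view angle set,
# e.g. {"Se_1": ModelVisibilityInfo(...), "Se_2": ModelVisibilityInfo(...)}.
```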
In the scheme shown in the embodiment of the application, the development end equipment can provide a rendering engine tool plug-in, so that support is provided for the graphic rendering development process. The developer can set pre-calculation parameters (including the high-probability visual angle set) in the engine, open the scene and call a pre-calculation command, so that the pre-calculation stage of the model scene can be completed.
The setting interface of the pre-calculated parameters may include a camera setting part of each view angle parameter set, and may further include a set of possible transformation information of a camera in the view angle parameter set. In addition, in order to facilitate the determination of the unoccluded model part by the rendering mode subsequently, rendering related parameters, such as shaders used for drawing vertex colors in the pre-calculation process, rendering resolution sets, rendering precision multiplying power and the like, can be further set.
Fig. 3 is a schematic diagram of a pre-calculation parameter setting interface according to an embodiment of the present application. As shown in fig. 3, the camera mode A (Camera Pattern A) 31 is the camera setting part of the first view angle parameter set, and the transformation information A (Transformations A) 32 contains the set of possible transformation information of the camera in the first view angle parameter set. Similarly, the camera mode B (Camera Pattern B) and the transformation information B (Transformations B) are the related settings of the second view angle parameter set. The vertex color shader (Color Shader) 33 is the shader used to draw vertex colors in the pre-calculation process. The rendering resolution (Screen Size) 34 is the rendering resolution setting common to the two view angle parameter sets. The accuracy magnification (Accuracy Times) 35 is used to set the precision magnification of the pre-calculation rendering relative to the final rendering.
After the setting is completed, a scene needing pre-calculation is opened, and fig. 4 is a schematic diagram of an engine menu bar according to an embodiment of the present application. As shown in fig. 4, the pre-computation command and the debug command may be invoked through the engine's menu bar 41.
After the debug window command is opened, the engine may display a debug window, and fig. 5 is a schematic diagram of a debug window according to an embodiment of the present application. As shown in fig. 5, a related pop-up window 51 may be displayed in the window, and the number of different scene models of the current virtual scene may be displayed in the related pop-up window 51.
Clicking different buttons in the editor can display the models in the scene according to different screening rules, so that a developer can check whether the occluders and occludees in the current scene have been screened correctly.
After the pre-computation is completed, the scene can be run, and the component performing the pre-computation can be found on the camera object. The context menu of the component may contain operations for culling the occluded model parts of each view angle parameter set. The component provides an interface in code and may also provide a way for developers to trigger the pre-computed parameters from other logic scripts.
Fig. 6 is a culling control interface according to an embodiment of the present application. As shown in fig. 6, the context menu 61 corresponding to the "VRIBO control" component in the camera object contains "STA", "STB" and "RM" button controls; by clicking these buttons, the user can respectively enable culling of the scene under view set A, culling under view set B, and no culling. The component provides an interface in code and also provides a way for developers to trigger the sub-model settings from other logic scripts.
After the model visibility information corresponding to the high-probability perspective set is generated, the developer device may deploy the high-probability perspective set and the model visibility information corresponding to the high-probability perspective set to the virtual scene display device as a part of rendering data of the virtual scene or as associated data of the virtual scene.
Step 207, the virtual scene display device obtains a target view angle, where the target view angle is the camera view angle for observing the virtual scene; the virtual scene corresponds to a high-probability view angle set, and the high-probability view angle set includes camera view angles whose access probability in the virtual scene is greater than a probability threshold.
In the embodiment of the application, the virtual scene display device can acquire the camera view angle of the virtual scene observed at the current moment in the process of displaying the virtual scene to obtain the target view angle.
Step 208, the virtual scene display device responds to that the target view belongs to the high-probability view set, and obtains model visibility information corresponding to the high-probability view set; the model visibility information is used to indicate portions of the respective scene models in the virtual scene that are unobstructed under the set of high probability perspectives.
The virtual scene display device may detect whether the target view belongs to a high probability view set, and if so, the processor (e.g., CPU) may submit rendering data of the submodel corresponding to the high probability view set to a rendering component (e.g., GPU) for rendering, so that the rendering component only needs to perform vertex coloring on a visible submodel, and does not need to perform vertex coloring on a complete scene model in the virtual scene. In this step, if the target view belongs to the high-probability view set, the virtual scene display device may obtain model visibility information corresponding to the high-probability view set.
The model visibility information may indicate a vertex index and a polygon index of a polygon corresponding to an unobstructed model portion in the high-probability view set, and may be used to query rendering data corresponding to the unobstructed model portion in the high-probability view set.
Step 209, the virtual scene display apparatus submits the rendering data of the model portion indicated by the visibility information to a rendering component, so as to render the scene picture of the virtual scene through the rendering component; the sub-model information includes rendering data for the model portion indicated by the visibility information.
As can be seen from the above steps, the visibility information includes polygon visibility information of the unobstructed model portion and visibility information of polygon vertices of the unobstructed model portion. In the embodiment of the application, the virtual scene display device may read, through the polygon visible information, the polygon array corresponding to each polygon in the un-occluded model portion under the high-probability view set, read the vertex array of each polygon vertex in the un-occluded model portion, and submit the read polygon array and vertex array to the rendering component for rendering, so as to render the submodel under the high-probability view set.
In step 210, the virtual scene display apparatus renders the scene picture of the virtual scene based on the scene models of the virtual scene in response to the target view angle not belonging to the high-probability view angle set.
In this embodiment of the application, if the target view does not belong to the high probability view set, the processor in the virtual scene display device may submit rendering data of each scene model in the virtual scene to the rendering component for rendering, so as to ensure normal display at a camera view corresponding to the non-high probability view set.
In step 211, the virtual scene display device displays a scene picture of the virtual scene.
To sum up, according to the scheme shown in the embodiment of the present application, in the virtual scene, indication information of an unoccluded model portion in the high probability view set is generated in advance, in the virtual scene rendering process, when the target view of the user is in the high probability view set, the unoccluded model portion in the high probability view set is rendered, and the occluded model portion does not need to be submitted for rendering, and accordingly, vertex coloring is not needed to be performed on the occluded model portion, so that vertex coloring steps in the rendering process can be reduced under most conditions, and the rendering efficiency of the virtual scene is improved.
The scheme shown in the embodiment corresponding to fig. 2 can be divided into three aspects, which are respectively: the method comprises the steps of system design, system pre-calculation implementation process and system operation implementation process.
First, system design
When a scene is rendered, let the full set of the camera view angle parameter e be U_e, and let p(e) be the activity probability of the camera view angle parameter e during rendering. Then:
∫_{e∈U_e} p(e) de = 1
According to the frequent activity range of the view angle e, two disjoint sparse view angle sets Se_1 and Se_2 are created so as to cover activity probabilities p(Se_1) and p(Se_2) that are as high as possible. At this point, let Se_0 = U_e − Se_1 − Se_2 serve as the fallback when the culling condition is not satisfied; its activity probability is p(Se_0) = 1 − p(Se_1) − p(Se_2).
For the full set U_t of all triangles, define a subset thereof as St, and define the visible set under a certain view angle e as St(e). When the view angle e is uncertain, visibility is also uncertain, and the visible set at that time is taken to be the full set U_t. In the pre-calculation stage, the visible sets St_e1 and St_e2 of Se_1 and Se_2 are determined by calculation, giving:
St_e1 = ∪_{e∈Se_1} St(e), St_e2 = ∪_{e∈Se_2} St(e)
Four sets St_1, St_2, St_3 and St_4 are then created, such that:
St_1 = St_e1 − St_e2, St_2 = St_e1 ∩ St_e2, St_3 = St_e2 − St_e1, St_4 = U_t − St_e1 − St_e2
The vertex array and index array of the model are then rearranged in the order St_1, St_2, St_3, St_4, so that different subsets of the model are submitted and rendered under different view angle sets: under Se_1, St_1 ∪ St_2 is submitted; under Se_2, St_2 ∪ St_3 is submitted; and under Se_0, the full set U_t is submitted.
a vertex/triangle array visualization schematic of the model may be as shown in fig. 7.
In a modern graphics rendering pipeline, when a draw is submitted to the GPU (Graphics Processing Unit), it is possible to designate that only a part of the triangle array is drawn, and also to designate the sub-interval of the vertex array corresponding to that part of the triangles, thereby implementing triangle culling and vertex culling. In the actual engineering implementation, the vertices can also be processed following the same reorganization idea as the triangles, so that the program can perform triangle culling and vertex culling simultaneously for Se_1 and Se_2. After the pre-calculation phase is completed, the new model data is saved and the information of the sub-models is exported.
During runtime, the view angle set containing the current view angle is queried, the corresponding visible set is obtained, and the sub-model information is submitted to the GPU, so that scene triangles and vertices can be culled at extremely low cost.
Second, system precomputation implementation process
A flowchart of the pre-calculation implementation of the system may be as shown in fig. 8. First, the program's pre-calculation configuration is input (step S81); the program then acquires the scene settings, obtains the scene, sparse view angle and pre-calculation-related item settings, and sets up the scene (step S82). Next, the occluders and occludees in the scene are found, vertex colors are set for all vertices according to the index order of the scene triangles, and other unrelated renderers are disabled; the view angle set Se_1 is traversed, and the visibility of St_e1 is rendered and calculated; the view angle set Se_2 is traversed, and the visibility of St_e2 is rendered and calculated (step S83). Then, each occluded model of the scene is reorganized (step S84): the visibility of the triangles is obtained, the model is reorganized according to the triangle visibility, the triangle indexes are re-associated with the vertices, the reorganized model file is output, and the sub-model information of St_e1 and St_e2 is calculated (step S85).
The detailed implementation scheme of the main steps in the process is as follows:
1) setting up scenes
The scene first needs to be set up, and the occluders and occludees of the current scene are obtained according to rules. An occluder is defined as a model that can occlude other models during real-time rendering, and can be screened as a static model with non-translucent material; an occludee is defined as a model whose triangles can be culled according to the view angle during real-time rendering, and can be screened with reference to the following conditions: static, non-translucent material, sufficient culling value, and so on.
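A hedged sketch of these screening rules; the model attributes (static flag, translucency, triangle count) and the triangle-count heuristic standing in for "sufficient culling value" are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class SceneModel:
    name: str
    is_static: bool
    is_translucent: bool
    triangle_count: int

def screen_occluders(models):
    """Occluders: static, non-translucent models that can hide other models."""
    return [m for m in models if m.is_static and not m.is_translucent]

def screen_occludees(models, min_triangles=100):
    """Occludees: static, non-translucent models whose triangle count is large
    enough that culling them is worthwhile ("sufficient culling value")."""
    return [m for m in models
            if m.is_static and not m.is_translucent
            and m.triangle_count >= min_triangles]
```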
The existing scene and models are backed up, other irrelevant rendering components are disabled, the screened occluders and occludees are triangle-numbered, the vertex arrays of the models are reorganized according to the triangle numbers, and each triangle is assigned a determined vertex color, the encoding from the index number to the color being as follows:
wherein the color consists of three channels, red, green and blue, each with a value from 0 to 255. The corresponding decoding from a color back to the index number is as follows:
index(color) = color_blue × 256² + color_green × 256 + color_red − 1
2) computing visibility of a set of perspectives
This step acquires the visible sets St_e1 and St_e2 under the view angle parameter sets Se_1 and Se_2 respectively. Taking Se_1 as an example, the input is the view angle parameter set Se_1 and the output is St_e1; a flowchart of outputting the triangle visible set may be as shown in fig. 9. First, the view angle parameter set Se_1 is input (step S91), and maxIndex is set to the total number of triangles of the occludees (step S92). A result array of length maxIndex is then initialized to all 0 (step S93). All view angle parameters of Se_1 are then traversed: it is judged whether all view angle parameters of Se_1 have been traversed (step S94); if not, the camera view angle parameters are set (step S95), the current scene is rendered with the shader for drawing vertex colors (step S96), the rendering result is read back (step S97), and it is judged whether every pixel of the frame buffer has been traversed (step S98). If every pixel of the frame buffer has been traversed, step S94 is executed again; if not, the pixel color is decoded into the number index (step S99), result[index] is set to 1 (step S910), and step S98 continues to be executed. When step S94 determines that all view angle parameters of Se_1 have been traversed, the corresponding triangle visible set St_e1 is output (step S911).
In this process, the view angle parameters should include the camera's rendering resolution, viewport aspect ratio, field of view angle, camera position and spatial rotation. It should be noted that before invoking rendering, it should be ensured that the camera's multisample anti-aliasing and high dynamic range functions are turned off, the background color is set to black, and the render target is set to a render texture of the given resolution.
When the rendering result is read back, the currently active render texture is set to the render texture used for rendering, and a ReadPixel instruction is then executed to read the data back from the GPU to the CPU, giving a color array at the rendering resolution; the triangle corresponding to each pixel can then be obtained through the decoding formula in step 1).
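A simplified sketch of this per-view-set visibility pass. The render_with_vertex_colors and read_back_pixels callables stand in for the engine-specific rendering and read-back calls (e.g. rendering into a render texture and reading it back); they and the parameter layout are assumptions of this sketch, not real engine APIs.

```python
def compute_triangle_visibility(view_parameter_set, max_index,
                                render_with_vertex_colors, read_back_pixels,
                                color_to_index):
    """Return a 0/1 array of length max_index marking the triangles that are
    visible from at least one view in the view parameter set (St_e for this Se)."""
    result = [0] * max_index
    for view in view_parameter_set:
        # Set the camera parameters (resolution, aspect ratio, field of view,
        # position, rotation) and render with the vertex-color shader,
        # MSAA/HDR disabled and background cleared to black.
        render_with_vertex_colors(view)
        # Read the rendered pixels back from the GPU to the CPU.
        for pixel_color in read_back_pixels():
            index = color_to_index(pixel_color)
            if 0 <= index < max_index:  # skip background pixels (index == -1)
                result[index] = 1
    return result
```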
3) Reorganizing the occluded models
This step is the key step of the present application. After step 2), the program has obtained two arrays St_e1 and St_e2 describing the visibility of the scene triangles, each of length maxIndex. A visualization obtained by combining the two pieces of triangle visibility information with the triangle index array of the original scene model may be as shown in fig. 10, where the data shown is random sample data.
Taking model 1 as an example, the process of model recombination comprises the following steps:
s1, intercepting the current model triangle visibility St _ e1 and St _ e2 from the scene triangle visibility result
S2, model vertex visibility Sv _ e1 and Sv _ e2 are calculated according to the model triangle visibility
S3, recombining the vertex arrays
S4, reassembling the triangle array
S5, updating the vertex information corresponding to the triangle array to the new vertex index
S6, outputting model and sub-model information
Please refer to fig. 11, which illustrates a schematic diagram of vertex visibility output according to an embodiment of the present application. Since the triangle array stores the indexes of the model vertices in triplets, the vertex visibility Sv_e1 can be obtained from St_e1 through the method shown in fig. 11. First, a model and its triangle visible information are input (step S1101), and the total number of vertices, vertexCount, is obtained (step S1102). An array Sv_e1 of length vertexCount is then initialized to all 0 (step S1103). It is then judged whether the triangle visible information array has been fully traversed (step S1104); if not, it is judged whether the current triangle is visible (step S1105). If the triangle is not visible, step S1104 continues to be executed; if the triangle is visible, the vertex indexes a, b, c of the visible triangle are taken out (step S1106), and Sv_e1[a], Sv_e1[b] and Sv_e1[c] are set to 1 (step S1107). When it is judged that the triangle visible information array has been fully traversed, the vertex visibility array Sv_e1 is output (step S1108).
Taking triangle 2 as an example: in step 2), the program has learned that this triangle is visible under Se_1 and invisible under Se_2, so its corresponding vertex 4, vertex 5 and vertex 6 are also visible under Se_1 and invisible under Se_2. Through this process, a visible set diagram as shown in fig. 12 can be obtained. The vertex attributes in the vertex array represent the information of each vertex and, depending on the data exported from the model during three-dimensional modeling, may include vertex coordinates, vertex texture coordinates, vertex normals, vertex tangents, etc.; the concatenated vertex attribute data is the actual content of the vertex array.
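A minimal sketch of deriving the vertex visibility Sv_e1 from the triangle visibility St_e1, following the flow of fig. 11; the flat triangle-index layout (three vertex indices per triangle) matches the description above.

```python
def triangle_visibility_to_vertex_visibility(triangle_visibility, triangle_indices,
                                             vertex_count):
    """triangle_visibility: 0/1 list with one entry per triangle (St_e1).
    triangle_indices: flat list of vertex indices, three per triangle.
    Returns a 0/1 list of length vertex_count (Sv_e1)."""
    vertex_visibility = [0] * vertex_count
    for tri, visible in enumerate(triangle_visibility):
        if not visible:
            continue
        a, b, c = triangle_indices[3 * tri: 3 * tri + 3]
        vertex_visibility[a] = vertex_visibility[b] = vertex_visibility[c] = 1
    return vertex_visibility

# Triangle 2 (vertex indices 4, 5, 6) visible -> vertices 4, 5 and 6 marked visible.
st_e1 = [0, 0, 1]
indices = [0, 1, 2, 1, 2, 3, 4, 5, 6]
print(triangle_visibility_to_vertex_visibility(st_e1, indices, vertex_count=7))
# -> [0, 0, 0, 0, 1, 1, 1]
```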
In order to be able to cull vertex information under Se_1 and Se_2, the program needs to recombine the entire vertex attribute array according to the resulting Sv_e1 and Sv_e2. The vertex data is recombined in the same way as the triangle array recombination given in the system design. Taking Sv_e1 and Sv_e2 together and setting the vertex full set as U_v, the following four temporary intervals are solved:
Sv_1 = Sv_e1 − Sv_e2, Sv_2 = Sv_e1 ∩ Sv_e2, Sv_3 = Sv_e2 − Sv_e1, Sv_4 = U_v − Sv_e1 − Sv_e2
and then rearranging the vertex arrays according to the sequence of Sv1, Sv2, Sv3 and Sv4 to obtain a new vertex array, wherein the schematic diagram of the new vertex array can be shown in FIG. 13. It can be seen that after recombination, the visibility of Sv _1, Sv _2, Sv _3, Sv _4 respectively are e1 visible, e2 invisible, e1 and e2 both visible, e1 invisible, e2 visible, e1 and e2 both invisible.
From the index ranges of the new vertex array, the vertex visible sub-intervals VBO_e0, VBO_e1 and VBO_e2 under Se_0, Se_1 and Se_2 can be obtained, where a VBO is a pair defined as {offset, count}, offset being the offset in the vertex array and count being the number of vertices:
VBO_e0 = {0, |U_v|}, VBO_e1 = {0, |Sv_1| + |Sv_2|}, VBO_e2 = {|Sv_1|, |Sv_2| + |Sv_3|}
The whole vertex array is then stored in the new index order, replacing the original vertex array, which completes the recombination of the vertex array.
The triangle array is then reordered in the same way, and the schematic diagram of the rearranged triangle array can be as shown in fig. 14.
In the same way, the triangle visible sub-intervals IBO_e0, IBO_e1 and IBO_e2 under Se_0, Se_1 and Se_2 can be obtained, each IBO likewise being an {offset, count} pair over the triangle array:
IBO_e0 = {0, |U_t|}, IBO_e1 = {0, |St_1| + |St_2|}, IBO_e2 = {|St_1|, |St_2| + |St_3|}
the reorganization of the triangle array also needs to consider the last step, and since the vertex array is reorganized, the position of the vertex is changed, and remapping is needed. For example, the old vertex index of the first triangle in the new vertex index array is 4, 5, 6, and the corresponding three vertex attributes are E, F, G, but the three vertices are not 4, 5, 6 but 0, 1, 2 in the new vertex array. It is necessary to reverse the mapping by the transformation relation according to the vertex array. To obtain the correct indices 0, 1, 2, the vertex index after remapping can be shown in FIG. 15.
After the mapping is finished, the content of the triangle array is replaced with the new vertex indexes and the whole triangle array is stored, which completes the recombination of the triangle array.
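Putting the recombination steps together, a compact sketch under the stated assumptions (vertices and triangles are partitioned in the order "e1 only, both, e2 only, neither"; VBO/IBO are {offset, count} pairs, with the IBO counted in triangles); the exact on-disk format used by the application is not reproduced here.

```python
def reorganize_model(vertices, triangle_indices, sv_e1, sv_e2, st_e1, st_e2):
    """Reorder vertices and triangles so that the parts visible under Se_1 and Se_2
    occupy contiguous sub-intervals, and return the new arrays together with the
    VBO/IBO {offset, count} pairs."""

    def partition(count, vis1, vis2):
        groups = [[], [], [], []]  # e1 only, both, e2 only, neither
        for i in range(count):
            if vis1[i] and vis2[i]:
                groups[1].append(i)
            elif vis1[i]:
                groups[0].append(i)
            elif vis2[i]:
                groups[2].append(i)
            else:
                groups[3].append(i)
        return groups

    # Vertex array: partition, rebuild, and record the vertex sub-intervals.
    v_groups = partition(len(vertices), sv_e1, sv_e2)
    new_vertex_order = [i for g in v_groups for i in g]
    new_vertices = [vertices[i] for i in new_vertex_order]
    old_to_new = {old: new for new, old in enumerate(new_vertex_order)}
    vbo_e1 = (0, len(v_groups[0]) + len(v_groups[1]))
    vbo_e2 = (len(v_groups[0]), len(v_groups[1]) + len(v_groups[2]))

    # Triangle array: partition in the same order, then remap each old vertex
    # index to its position in the new vertex array.
    tri_count = len(triangle_indices) // 3
    t_groups = partition(tri_count, st_e1, st_e2)
    new_tri_order = [t for g in t_groups for t in g]
    new_triangle_indices = []
    for t in new_tri_order:
        a, b, c = triangle_indices[3 * t: 3 * t + 3]
        new_triangle_indices += [old_to_new[a], old_to_new[b], old_to_new[c]]
    ibo_e1 = (0, len(t_groups[0]) + len(t_groups[1]))  # counted in triangles
    ibo_e2 = (len(t_groups[0]), len(t_groups[1]) + len(t_groups[2]))

    return new_vertices, new_triangle_indices, vbo_e1, vbo_e2, ibo_e1, ibo_e2
```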
Since the model has the same length before and after recombination, and the visible sets for Se_0 are the full sets U_t and U_v, no additional data needs to be saved for Se_0; for Se_1 and Se_2, the data to be saved are IBO_e1, VBO_e1, IBO_e2 and VBO_e2. The data to be stored here is the model visibility information corresponding to the high-probability view angle set in the embodiment shown in fig. 2.
And replacing the processed model (including the vertex array and the triangle array after rearrangement and numbering) with the original model, and storing the sub-model interval information (namely the model visibility information) in a renderer script of the scene to finish the pre-calculation process.
Third, realizing process in system operation
A schematic diagram of the virtual scene runtime implementation process may be as shown in fig. 16. The program needs to acquire the current view angle parameter e (step S1601) and then, according to the definition and classification of the sparse view angle sets in the system design section, compute the corresponding sparse view angle set (step S1602). It is judged whether e belongs to the Se_1 range (step S1603); if so, the visible set is set to St_e1 (step S1604). If e does not belong to the Se_1 range, it is judged whether e belongs to the Se_2 range (step S1605); if so, the visible set is set to St_e2 (step S1606); otherwise, the visible set is set to St_e0 (step S1607). Finally, the corresponding VBO and IBO sub-interval information is extracted, and the selected visible set (St_e1, St_e2 or St_e0) is submitted to the GPU through the sub-model interval information for rendering (step S1608).
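A minimal runtime sketch following fig. 16; submit_draw stands in for the engine-specific call that draws only the given triangle and vertex sub-ranges, and the dictionary layout of the sub-model information is an assumption of this sketch.

```python
def render_frame(current_view, se_1, se_2, sub_models, submit_draw):
    """Select the visible set for the current view parameters and submit only the
    corresponding sub-intervals of each occludee model to the GPU."""
    if current_view in se_1:
        key = "e1"
    elif current_view in se_2:
        key = "e2"
    else:
        key = "e0"  # fallback Se_0: no culling, the full model is drawn

    for model in sub_models:
        ibo_offset, ibo_count = model["ibo"][key]  # triangle sub-interval
        vbo_offset, vbo_count = model["vbo"][key]  # vertex sub-interval
        submit_draw(model["mesh"], ibo_offset, ibo_count, vbo_offset, vbo_count)
```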
The method provides a scene triangle and vertex culling scheme under sparse view angle parameters, exchanging runtime consumption and memory occupation that are low enough to be negligible for a large culling rate. It is suitable for graphics products with sparse view angle parameters and relatively high scene complexity, and can run compatibly on different engines, devices and platforms.
The present application solves problems that existing schemes in the industry cannot solve: existing dynamic culling schemes in the CPU stage use the model as a coarse granularity and therefore cannot cull more finely; static culling schemes for the scene can reach fine granularity, but because of the large freedom of the view angle space the culling rate is not high; and fine-grained dynamic model culling processes all take place in the GPU stage, where the consumption of GPU bandwidth and pixel shaders is still large, bringing ineffective optimization in scenes with a large number of vertices. The dynamic culling scheme of the present application can be executed on the CPU side, guarantees ultra-high culling fineness and culling correctness, and introduces only negligible extra performance and space consumption.
In the scheme of the application, theoretically, the rejection yield of the system is as follows:
where |U_t| is a fixed quantity; two sparse view angle sets Se_1 and Se_2 with high activity probability and a high culling rate for each set should be selected as far as possible to improve the culling yield of the system. For example, when rendering a two-player chess-like game, the camera is mostly active around the viewing angles of the two players, and placing Se_1 and Se_2 in these two regions brings a good culling effect. In this type of scene, since the camera parameters do not change frequently, the culling process can be further optimized logically in the scene's program: the camera parameter view angle set does not need to be recalculated every frame, and can be set once when the view angle changes.
In addition, in some shadow rendering scenes, one sparse view angle set can be set as the light source projection view angle, which can bring a large improvement to the shadow rendering stage. A proxy model for the shadow casting process is set during runtime rendering so that the shadow rendering result can be correctly presented on the screen.
Since the visible set is determined in the pre-calculation process by rendering and reading back visibility, the culling covers three types, namely occlusion culling, back-face culling and frustum culling; the culling effect is considerable, functions such as CPU frustum culling and PreZ culling do not need to be enabled, and the runtime consumption of the system can be further reduced.
The method of the present application was tested on actual projects and verified in the rendering of a two-player chess-like game. In the full set of camera parameters of the scene, more than 99% of the parameters are concentrated around the viewing angles of the two players; Se_1 and Se_2 were therefore determined as the active ranges of the two players' viewing angles respectively, and the resolutions of several devices were used as the pre-calculation rendering resolutions. The comparison results show a large improvement in both scenes; the test results are shown in Table 1 below (the data includes some dynamic objects that cannot be culled, and so on):
TABLE 1
For example, a scene preview image of one view of scene 2 before culling may be as shown in fig. 17. After culling is completed, a scene preview of Sv_e1 and Sv_e2 viewed from a free view angle may be as shown in fig. 18. The actual rendering result of the camera is unchanged before and after culling.
Fig. 19 is a block diagram of a screen presentation apparatus according to an exemplary embodiment of the present application, which may be used to perform all or part of the steps performed by the virtual scene presentation device in the method shown in fig. 2. As shown in fig. 19, the apparatus includes:
a view angle obtaining module 1901, configured to obtain a target view angle, where the target view angle is a camera view angle for observing a virtual scene; the virtual scene corresponds to a high-probability view angle set, and the high-probability view angle set comprises camera view angles with access probability larger than a probability threshold value in the virtual scene;
a visibility information obtaining module 1902, configured to, in response to the target view angle belonging to the high-probability view angle set, obtain model visibility information corresponding to the high-probability view angle set; the model visibility information is used to indicate the portions of the scene models in the virtual scene that are not occluded under the high-probability view angle set;
a rendering module 1903, configured to submit rendering data of the model portion indicated by the visibility information to a rendering component to render a scene picture of the virtual scene through the rendering component; the sub-model information comprises rendering data for a model portion indicated by the visibility information;
a displaying module 1904, configured to display a scene picture of the virtual scene.
In one possible implementation, the scene model is composed of at least two polygons; the visibility information includes polygon visibility information for the unoccluded model portion and visibility information for polygon vertices of the unoccluded model portion.
In one possible implementation, the polygon visibility information includes an index interval of a polygon in the unoccluded model portion;
the vertex visibility information includes index ranges for polygon vertices in the unoccluded model portion.
In one possible implementation, the apparatus further includes:
and the picture rendering module is used for rendering the scene picture of the virtual scene based on the scene models of the virtual scene in response to the target view angle not belonging to the high-probability view angle set.
To sum up, according to the scheme shown in the embodiment of the present application, in the virtual scene, indication information of an unoccluded model portion in the high probability view set is generated in advance, in the virtual scene rendering process, when the target view of the user is in the high probability view set, the unoccluded model portion in the high probability view set is rendered, and the occluded model portion does not need to be submitted for rendering, and accordingly, vertex coloring is not needed to be performed on the occluded model portion, so that vertex coloring steps in the rendering process can be reduced under most conditions, and the rendering efficiency of the virtual scene is improved.
Fig. 20 is a block diagram of a screen generating apparatus according to an exemplary embodiment of the present application, which may be used to execute all or part of the steps executed by the development-side device in the method shown in fig. 2. As shown in fig. 20, the apparatus includes:
a view angle set obtaining module 2001, configured to obtain a high probability view angle set corresponding to a virtual scene, where the high probability view angle set includes camera view angles whose access probabilities in the virtual scene are greater than a probability threshold;
an indication information obtaining module 2002, configured to determine, based on the high-probability view angle set, visible model part indication information, where the visible model part indication information is used to indicate a part of each scene model in the virtual scene that is not occluded under the high-probability view angle set;
a visibility information generation module 2003, configured to generate model visibility information corresponding to the high probability view set; the visibility information is used for indicating that the virtual scene display equipment submits rendering data of the model part indicated by the visibility information to a rendering component when the target visual angle belongs to the high-probability visual angle set; the target perspective is a camera perspective from which the virtual scene is viewed.
In one possible implementation, the scene model is composed of at least two polygons; the visibility information includes polygon visibility information for the unoccluded model portion and visibility information for polygon vertices of the unoccluded model portion.
In a possible implementation manner, the indication information obtaining module includes:
the array obtaining submodule is used for obtaining a polygon visibility array of each scene model under the high-probability visual angle set and taking the polygon visibility array as partial indication information of the visible model; the polygon visibility array is used for indicating whether polygons in each scene model are visible under the high-probability view angle set or not.
In a possible implementation manner, the polygon visibility array includes values corresponding to polygons in the scene models respectively;
the array acquisition submodule includes:
a polygon acquiring unit, configured to acquire a target polygon, where the target polygon is a polygon that is in a visible state at a first camera view angle, among polygons included in a target scene model; the target scene model is a scene model which is occluded under the first camera view angle in each scene model; the first camera perspective is any one of the set of high probability perspectives;
and the numerical value setting unit is used for setting the numerical value corresponding to the target polygon in the polygon visibility array as a specified numerical value.
In one possible implementation, the apparatus further includes:
the model screening unit is used for screening a first type scene model meeting the shielding condition and a second type scene model meeting the shielded condition from the scene models before acquiring the target polygon;
and the target determining unit is used for determining a scene model which is shielded by the first type scene model under a first camera view angle in the second type scene models as the target scene model.
In a possible implementation manner, the polygon obtaining unit is configured to,
numbering the vertexes of each polygon in the target scene model;
assigning different color values to the vertices of the respective polygons based on the numbers of the vertices of the respective polygons;
performing vertex coloring rendering on the target scene model based on the first camera view to obtain a vertex coloring rendering image corresponding to the target scene model;
acquiring visible vertexes in vertexes of each polygon based on color values on each pixel point in the vertex coloring rendering image;
and acquiring the target polygon based on visible vertexes of the polygons.
In one possible implementation manner, the visibility information generating module includes:
the sorting submodule is used for sorting the polygons of the scene models based on the polygon visibility array; polygons visible under the high-probability visual angle set are continuous in the sequenced polygons of the scene models;
the first information acquisition submodule is used for acquiring the polygon visible information of the unoccluded model part based on the polygon index numbering result; the polygon index numbering result is a result of sequentially numbering the indexes of the polygons of the sequenced scene models; the polygon visibility information contains an index interval of a polygon in the unoccluded model part;
the second information acquisition submodule is used for acquiring the vertex visible information of the unoccluded model part based on the polygon vertex index numbering result; the polygon vertex index numbering result is a result of sequentially numbering indexes of vertexes in the polygons of the sequenced scene models; the vertex visibility information includes index intervals for polygon vertices in the unoccluded model portion;
and the visibility information generation submodule is used for acquiring the polygon visible information of the unoccluded model part and the vertex visible information of the unoccluded model part as the model visibility information corresponding to the high-probability visual angle set.
To sum up, according to the scheme shown in the embodiment of the present application, in the virtual scene, indication information of an unoccluded model portion in the high probability view set is generated in advance, in the virtual scene rendering process, when the target view of the user is in the high probability view set, the unoccluded model portion in the high probability view set is rendered, and the occluded model portion does not need to be submitted for rendering, and accordingly, vertex coloring is not needed to be performed on the occluded model portion, so that vertex coloring steps in the rendering process can be reduced under most conditions, and the rendering efficiency of the virtual scene is improved.
FIG. 21 is a block diagram illustrating a computer device according to an example embodiment. The computer device may be implemented as a development-side device or a virtual scene display device in the system shown in fig. 1.
The computer apparatus 2100 includes a Central Processing Unit (CPU) 2101, a system Memory 2104 including a Random Access Memory (RAM) 2102 and a Read-Only Memory (ROM) 2103, and a system bus 2105 connecting the system Memory 2104 and the Central Processing Unit 2101. Optionally, the computer device 2100 also includes a basic input/output system 2106 to facilitate transfer of information between various devices within the computer, and a mass storage device 2107 for storing an operating system 2113, application programs 2114, and other program modules 2115.
The mass storage device 2107 is connected to the central processing unit 2101 through a mass storage controller (not shown) connected to the system bus 2105. The mass storage device 2107 and its associated computer-readable media provide non-volatile storage for the computer device 2100. That is, the mass storage device 2107 may include a computer-readable medium (not shown) such as a hard disk or Compact disk Read-Only Memory (CD-ROM) drive.
Without loss of generality, the computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, flash memory or other solid state storage technology, CD-ROM, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that the computer storage media is not limited to the foregoing. The system memory 2104 and mass storage device 2107 described above may be collectively referred to as memory.
The computer device 2100 may be connected to the internet or other network device through a network interface unit 2111 connected to the system bus 2105.
The memory further includes one or more programs, the one or more programs are stored in the memory, and the central processor 2101 implements all or part of the steps performed by the development-side device in the method shown in fig. 2 by executing the one or more programs; alternatively, the central processor 2101 may implement all or part of the steps performed by the virtual scene representation apparatus in the method shown in fig. 2 by executing the one or more programs.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, which may be a computer readable storage medium contained in a memory of the above embodiments; or it may be a separate computer-readable storage medium not incorporated in the terminal. The computer readable storage medium has stored therein at least one computer program that is loaded and executed by a processor to implement the method according to the above-described embodiments of the present application.
Optionally, the computer-readable storage medium may include: a Read Only Memory (ROM), a Random Access Memory (RAM), a Solid State Drive (SSD), or an optical disc. The Random Access Memory may include a resistive Random Access Memory (ReRAM) and a Dynamic Random Access Memory (DRAM). The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
In an exemplary embodiment, a computer program product or computer program is also provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method described in the above embodiments.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the aspects disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a scope of the application being indicated by the following claims.
It is to be understood that the present application is not limited to the precise arrangements/instrumentalities shown in the drawings and described above, and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited by the appended claims.