
CN113457161A - Picture display method, information generation method, device, equipment and storage medium - Google Patents

Picture display method, information generation method, device, equipment and storage medium

Info

Publication number
CN113457161A
Authority
CN
China
Prior art keywords
model
scene
probability
polygon
virtual scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110805394.8A
Other languages
Chinese (zh)
Other versions
CN113457161B (en)
Inventor
王钦佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Tencent Network Information Technology Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202110805394.8A priority Critical patent/CN113457161B/en
Publication of CN113457161A publication Critical patent/CN113457161A/en
Application granted granted Critical
Publication of CN113457161B publication Critical patent/CN113457161B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 - Controlling the output signals based on the game progress
    • A63F13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/005 - General purpose rendering architectures
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 - Finite element generation, e.g. wire-frame surface description, tesselation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a picture display method, an information generation method, a device, equipment, and a storage medium, relating to the technical field of virtual scenes. The method includes the following steps: acquiring a target view angle; in response to the target view angle belonging to a high-probability view angle set, acquiring model visibility information corresponding to the high-probability view angle set, where the model visibility information is used to indicate the portion of each scene model in the virtual scene that is not occluded under the high-probability view angle set; submitting rendering data of the model portion indicated by the visibility information to a rendering component, so as to render a scene picture of the virtual scene through the rendering component, where the submitted sub-model information includes the rendering data of the model portion indicated by the visibility information; and displaying the scene picture of the virtual scene. The scheme can improve the rendering efficiency of virtual scenes.

Description

Picture display method, information generation method, device, equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of virtual scenes, in particular to a picture display method, an information generation method, a device, equipment and a storage medium.
Background
The visual presentation of the virtual scene is usually realized by rendering objects in the virtual scene.
In a three-dimensional virtual scene, scene objects usually occlude one another. To reduce the rendering workload and improve rendering efficiency when rendering a three-dimensional virtual scene, a rendering component in the virtual scene display device first shades the vertices of the scene objects in the virtual scene, and then, according to the occlusion relationships between scene objects, skips the subsequent rendering stages for the occluded model portion of each scene model.
However, in the above scheme, vertex shading is still performed for the occluded model portion of each scene model, which degrades the rendering efficiency of the virtual scene.
Disclosure of Invention
The embodiments of the application provide a picture display method, an information generation method, a device, equipment, and a storage medium, which can improve the rendering efficiency of a virtual scene. The technical scheme is as follows:
in one aspect, a method for displaying a picture is provided, the method comprising:
acquiring a target view angle, where the target view angle is a camera view angle for observing a virtual scene; the virtual scene corresponds to a high-probability view angle set, and the high-probability view angle set includes camera view angles whose access probability in the virtual scene is greater than a probability threshold;
in response to the target view angle belonging to the high-probability view angle set, acquiring model visibility information corresponding to the high-probability view angle set; the model visibility information is used to indicate the portion of each scene model in the virtual scene that is not occluded under the high-probability view angle set;
submitting rendering data of the model portion indicated by the visibility information to a rendering component, so as to render a scene picture of the virtual scene through the rendering component; the submitted sub-model information includes the rendering data of the model portion indicated by the visibility information;
and displaying a scene picture of the virtual scene.
In one aspect, an information generating method is provided, and the method includes:
acquiring a high-probability view angle set corresponding to a virtual scene, where the high-probability view angle set includes camera view angles whose access probability in the virtual scene is greater than a probability threshold;
determining visible model part indication information based on the high-probability view angle set, where the visible model part indication information is used to indicate the portion of each scene model in the virtual scene that is not occluded under the high-probability view angle set;
generating model visibility information corresponding to the high-probability view angle set; the visibility information is used to instruct the virtual scene display device to submit rendering data of the model portion indicated by the visibility information to a rendering component when a target view angle belongs to the high-probability view angle set; the target view angle is a camera view angle for observing the virtual scene.
In another aspect, there is provided a picture presentation apparatus, the apparatus including:
a view angle acquisition module, configured to acquire a target view angle, where the target view angle is a camera view angle for observing a virtual scene; the virtual scene corresponds to a high-probability view angle set, and the high-probability view angle set includes camera view angles whose access probability in the virtual scene is greater than a probability threshold;
a visibility information acquisition module, configured to acquire, in response to the target view angle belonging to the high-probability view angle set, model visibility information corresponding to the high-probability view angle set; the model visibility information is used to indicate the portion of each scene model in the virtual scene that is not occluded under the high-probability view angle set;
a rendering module, configured to submit rendering data of the model portion indicated by the visibility information to a rendering component, so as to render a scene picture of the virtual scene through the rendering component; the submitted sub-model information includes the rendering data of the model portion indicated by the visibility information;
and the display module is used for displaying the scene picture of the virtual scene.
In one possible implementation, the scene model is composed of at least two polygons; the visibility information includes polygon visibility information of the unoccluded model portion and vertex visibility information of the polygon vertices of the unoccluded model portion.
In one possible implementation, the polygon visibility information includes the index interval of the polygons in the unoccluded model portion;
the vertex visibility information includes the index interval of the polygon vertices in the unoccluded model portion.
In one possible implementation, the apparatus further includes:
and a picture rendering module, configured to render the scene picture of the virtual scene based on the scene models of the virtual scene in response to the target view angle not belonging to the high-probability view angle set.
In another aspect, an information generating apparatus is provided, the apparatus including:
a view angle set acquisition module, configured to acquire a high-probability view angle set corresponding to a virtual scene, where the high-probability view angle set includes camera view angles whose access probability in the virtual scene is greater than a probability threshold;
an indication information acquisition module, configured to determine, based on the high-probability view angle set, visible model part indication information, where the visible model part indication information is used to indicate the portion of each scene model in the virtual scene that is not occluded under the high-probability view angle set;
a visibility information generation module, configured to generate model visibility information corresponding to the high-probability view angle set; the visibility information is used to instruct the virtual scene display device to submit rendering data of the model portion indicated by the visibility information to a rendering component when a target view angle belongs to the high-probability view angle set; the target view angle is a camera view angle for observing the virtual scene.
In one possible implementation, the scene model is composed of at least two polygons; the visibility information includes polygon visibility information of the unoccluded model portion and vertex visibility information of the polygon vertices of the unoccluded model portion.
In a possible implementation manner, the indication information obtaining module includes:
an array acquisition submodule, configured to acquire the polygon visibility array of each scene model under the high-probability view angle set, and use the polygon visibility array as the visible model part indication information; the polygon visibility array is used to indicate whether each polygon in each scene model is visible under the high-probability view angle set.
In a possible implementation manner, the polygon visibility array includes values corresponding to polygons in the scene models respectively;
the array acquisition submodule includes:
a polygon acquisition unit, configured to acquire a target polygon, where the target polygon is a polygon, among the polygons contained in a target scene model, that is in a visible state under a first camera view angle; the target scene model is a scene model, among the scene models, that is occluded under the first camera view angle; and the first camera view angle is any one of the high-probability view angle set;
and the numerical value setting unit is used for setting the numerical value corresponding to the target polygon in the polygon visibility array as a specified numerical value.
In one possible implementation, the apparatus further includes:
a model screening unit, configured to screen out, from the scene models, first-type scene models satisfying an occluder condition and second-type scene models satisfying an occludee condition, before the target polygon is acquired;
and a target determining unit, configured to determine, among the second-type scene models, a scene model that is occluded by a first-type scene model under the first camera view angle as the target scene model.
In a possible implementation manner, the polygon obtaining unit is configured to,
numbering the vertexes of each polygon in the target scene model;
assigning different color values to the vertices of the respective polygons based on the numbers of the vertices of the respective polygons;
performing vertex coloring rendering on the target scene model based on the first camera view to obtain a vertex coloring rendering image corresponding to the target scene model;
acquiring visible vertexes in vertexes of each polygon based on color values on each pixel point in the vertex coloring rendering image;
and acquiring the target polygon based on visible vertexes of the polygons.
In one possible implementation manner, the visibility information generating module includes:
a sorting submodule, configured to sort the polygons of the scene models based on the polygon visibility array, so that the polygons visible under the high-probability view angle set are contiguous among the sorted polygons of the scene models;
a first information acquisition submodule, configured to acquire the polygon visibility information of the unoccluded model portion based on a polygon index numbering result; the polygon index numbering result is the result of sequentially numbering the indexes of the sorted polygons of the scene models; the polygon visibility information contains the index interval of the polygons in the unoccluded model portion;
a second information acquisition submodule, configured to acquire the vertex visibility information of the unoccluded model portion based on a polygon vertex index numbering result; the polygon vertex index numbering result is the result of sequentially numbering the indexes of the vertices in the sorted polygons of the scene models; the vertex visibility information contains the index interval of the polygon vertices in the unoccluded model portion;
and a visibility information generation submodule, configured to acquire the polygon visibility information of the unoccluded model portion and the vertex visibility information of the unoccluded model portion as the model visibility information corresponding to the high-probability view angle set.
In another aspect, the present application provides a computer device, which includes a processor and a memory, where at least one computer program is stored in the memory, and the at least one computer program is loaded by the processor and executed to implement the method according to the above aspect.
In another aspect, the present application provides a computer-readable storage medium, in which at least one computer program is stored, and the at least one computer program is loaded and executed by a processor to implement the method according to the above aspect.
In another aspect, embodiments of the present application provide a computer program product or a computer program, which includes computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method of the above aspect.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
in the virtual scene rendering process, when the target view angle of the user falls within the high-probability view angle set, only the model portion that is unoccluded under the high-probability view angle set is rendered; the occluded model portion does not need to be submitted for rendering and, correspondingly, does not need to be vertex-shaded. The vertex shading work in the rendering process is therefore reduced in most cases, improving the rendering efficiency of the virtual scene.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that those skilled in the art can obtain other drawings based on these drawings without creative effort.
FIG. 1 is an architecture diagram of a virtual scene development and presentation system provided by an exemplary embodiment of the present application;
FIG. 2 is a flowchart of a method for generating information and displaying frames in a virtual scene according to an exemplary embodiment of the present application;
FIG. 3 is a schematic diagram of a pre-calculated parameter setting interface according to the embodiment of FIG. 2;
FIG. 4 is a diagram of an engine menu bar according to the embodiment of FIG. 2;
FIG. 5 is a diagram of a debug window according to the embodiment shown in FIG. 2;
FIG. 6 is a culling control interface according to the embodiment of FIG. 2;
FIG. 7 is a schematic diagram of a vertex/triangle array visualization of the model to which the embodiment shown in FIG. 2 relates;
FIG. 8 is a flow diagram of a pre-calculation implementation of the system to which the embodiment shown in FIG. 2 relates;
FIG. 9 is a flow chart of the output of the visible set of triangles involved in the embodiment shown in FIG. 2;
FIG. 10 is a diagram of a visualization of information to which the embodiment shown in FIG. 2 relates;
FIG. 11 is a schematic diagram of a vertex visibility output according to the embodiment of FIG. 2;
FIG. 12 is a visible set diagram of a visualization to which the embodiment of FIG. 2 relates;
FIG. 13 is a diagram of a new vertex array involved in the embodiment of FIG. 2;
FIG. 14 is a schematic diagram of a rearranged triangle array according to the embodiment shown in FIG. 2;
FIG. 15 is a diagram illustrating remapped vertex indices in accordance with the embodiment of FIG. 2;
FIG. 16 is a schematic diagram of a virtual scene runtime implementation process according to the embodiment shown in FIG. 2;
FIG. 17 is a preview of the scene before culling according to the embodiment shown in FIG. 2;
FIG. 18 is a preview of the scene after culling according to the embodiment shown in FIG. 2;
FIG. 19 is a block diagram of a display device according to an embodiment of the present application;
fig. 20 is a block diagram of an information generating apparatus according to an embodiment of the present application;
fig. 21 is a block diagram of a computer device according to another embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It is to be understood that reference herein to "a number" means one or more and "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
Referring to fig. 1, an architecture diagram of a virtual scene development and presentation system according to an exemplary embodiment of the present application is shown. As shown in fig. 1, the virtual scene development and presentation system includes a development device 110 and a virtual scene presentation device 120.
The development-side device 110 may be a computer device corresponding to a developer/operator of a virtual scene.
After the virtual scene is developed, the data related to rendering of the virtual scene may be stored or updated in the virtual scene display device 120.
The virtual scene display device 120 is a computer device that runs an application program corresponding to the virtual scene. When the virtual scene display device 120 is a user terminal, the application program may be a client program; when the virtual scene display device 120 is a server, the application may be a server-side/cloud program.
A virtual scene refers to a virtual scene displayed (or provided) when an application program runs on a terminal. The virtual scene may be a simulated environment of the real world, a semi-simulated and semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, and a three-dimensional virtual scene; the following embodiments take a three-dimensional virtual scene as an example, but are not limited thereto.
For a three-dimensional virtual scene, to provide a better user experience, the user is generally allowed to adjust the viewing angle of the virtual scene over a large range. However, in many virtual scenes (such as an auto chess game scene), the view angles from which users actually observe the virtual scene are usually concentrated in a small portion of the possible view angles, and the view angle is rarely adjusted. That is, in these virtual scenes the view angle parameters used in graphics rendering are often concentrated in one or two small sets; for example, less than 20% or even less than 1% of the time is spent in the full-scene overview view, while at other times the view is fixed at a certain position. In such a virtual scene, if the view angle parameters can be grouped into one or a small number of sparse sets, the continuity of visibility within a small range of view angles can be exploited to pre-calculate the scene visible set corresponding to each view angle parameter set. Based on this, the embodiments of the application pre-calculate the scene model visible set corresponding to the view angle parameters; when the view angle parameters satisfy the condition during virtual scene display, rendering is submitted through the pre-calculated scene model visible set, which reduces vertex shading of occluded models and improves rendering efficiency.
The scheme is divided into an offline part and an online part, wherein the offline part is responsible for pre-calculating the relevant information of the scene model visible set corresponding to the view angle parameters, and the online part is responsible for submitting and rendering according to the relevant information of the scene model visible set under specific view angle parameters in the virtual scene operation process.
The offline part may be executed by the development-side device 110 and may include: acquiring a high-probability view angle set corresponding to the virtual scene, where the high-probability view angle set includes camera view angles whose access probability in the virtual scene is greater than a probability threshold; determining visible model part indication information based on the high-probability view angle set, where the visible model part indication information is used to indicate the portion of each scene model in the virtual scene that is not occluded under the high-probability view angle set; and generating model visibility information corresponding to the high-probability view angle set. The visibility information is used to instruct the virtual scene display device to submit rendering data of the model portion indicated by the visibility information to a rendering component when the target view angle belongs to the high-probability view angle set; the target view angle is a camera view angle for observing the virtual scene.
The online part may be performed by the virtual scene display device 120 and may include: acquiring a target view angle, where the target view angle is a camera view angle for observing the virtual scene, the virtual scene corresponds to a high-probability view angle set, and the high-probability view angle set includes camera view angles whose access probability in the virtual scene is greater than a probability threshold; in response to the target view angle belonging to the high-probability view angle set, acquiring model visibility information corresponding to the high-probability view angle set, where the model visibility information is used to indicate the portion of each scene model in the virtual scene that is not occluded under the high-probability view angle set; submitting rendering data of the model portion indicated by the visibility information to a rendering component, so as to render the scene picture of the virtual scene through the rendering component, where the submitted sub-model information includes the rendering data of the model portion indicated by the visibility information; and displaying the scene picture of the virtual scene.
In the above scheme, indication information of the model portion that is unoccluded under the high-probability view angle set is generated in advance for the virtual scene. During virtual scene rendering, when the target view angle of the user falls within the high-probability view angle set, only the unoccluded model portion is rendered; the occluded model portion does not need to be submitted for rendering and, correspondingly, does not need to be vertex-shaded. The vertex shading work in the rendering process is therefore reduced in most cases, improving the rendering efficiency of the virtual scene.
Referring to fig. 2, a flowchart of an information generation and picture display method in a virtual scene according to an exemplary embodiment of the present application is shown. The method may be performed by computer devices, namely the development-side device 110 and the virtual scene display device 120 in the system shown in fig. 1. As shown in fig. 2, the method may include the following steps:
Step 201, the development-side device acquires a high-probability view angle set corresponding to a virtual scene, where the high-probability view angle set includes camera view angles whose access probability in the virtual scene is greater than a probability threshold.
In this embodiment of the application, the virtual scene may correspond to one or more high-probability view angle sets, with no intersection between any two of them; for example, there may be two disjoint high-probability view angle sets.
A high-probability view angle set may include one or more camera view angles that are accessed with high probability. Here, a camera view angle being accessed means that, during the running of the virtual scene, the camera view angle for observing the virtual scene is set (either by system default or according to a view adjustment operation of the user) to that camera view angle.
In one possible implementation, the high probability view set may be set manually by a developer/operator.
In another possible implementation manner, the high-probability view angle set may also be obtained by the development-side device through statistical analysis.
For example, the development-side device may obtain operation records of the virtual scene, compute the access probability of each camera view angle corresponding to the virtual scene based on these operation records, and add the camera view angles whose access probability is higher than the probability threshold to the high-probability view angle set. The probability threshold may be preset in the development-side device by the developer.
The access probability of a camera view angle may be the ratio between the number of times that camera view angle is accessed and the total number of times all camera view angles in the virtual scene are accessed.
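As a concrete illustration of this statistic, the following minimal C++ sketch counts view accesses from operation records and applies the threshold; it assumes view angles have been discretized into identifiers, and all names are illustrative rather than taken from the patent.

#include <string>
#include <unordered_map>
#include <vector>

// Sketch: derive the high-probability view angle set from operation records.
// Each record is assumed to be the identifier of the camera view that was
// accessed; real records would be parsed from the operation logs.
std::vector<std::string> BuildHighProbabilityViewSet(
        const std::vector<std::string>& accessRecords,
        double probabilityThreshold) {
    std::unordered_map<std::string, int> accessCount;
    for (const std::string& view : accessRecords) {
        ++accessCount[view];
    }
    const double total = static_cast<double>(accessRecords.size());
    std::vector<std::string> highProbabilityViews;
    for (const auto& [view, count] : accessCount) {
        // Access probability = accesses of this view / total accesses.
        if (total > 0 && count / total > probabilityThreshold) {
            highProbabilityViews.push_back(view);
        }
    }
    return highProbabilityViews;
}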
After acquiring the high-probability view angle set, the development-side device can determine visible model part indication information based on it; the visible model part indication information is used to indicate the portion of each scene model in the virtual scene that is not occluded under the high-probability view angle set. For this process, refer to step 202 and its description below.
Step 202, the development-side device acquires the polygon visibility array of each scene model under the high-probability view angle set, and uses the polygon visibility array as the visible model part indication information; the polygon visibility array is used to indicate whether each polygon in each scene model is visible under the high-probability view angle set.
In a three-dimensional virtual scene, one virtual scene may include multiple scene models, such as buildings, virtual characters, and virtual terrain. A scene model is composed of at least two polygons; typically, a model in a scene is composed of a number of triangles, with adjacent triangles sharing edges. Connected through these edges, the triangles together form the outer surface of the scene model.
In the embodiments of the application, based on the principle that a scene model is composed of polygons, whether each polygon in the scene model is visible under the high-probability view angle set can be determined, so that polygons invisible under the high-probability view angle set can be culled, achieving the effect of removing the part of the scene model that is occluded under the high-probability view angle set.
In a possible implementation manner, the polygon visibility array includes values corresponding to respective polygons in the scene model.
The process of obtaining the polygon visibility array of each scene model under the high-probability view set may be as follows:
acquiring a target polygon, wherein the target polygon is a polygon in a visible state under a first camera view angle in each polygon contained in a target scene model; the target scene model is a scene model which is occluded from the first camera view angle in each scene model; the first camera perspective is any one of the set of high probability perspectives;
and setting the value corresponding to the target polygon in the polygon visibility array as a specified value.
In this embodiment of the application, the development-side device may use an array to represent whether each polygon of each scene model in the virtual scene is visible under the high-probability view angle set. For example, the length of the array may be the total number of polygons contained in the scene models of the virtual scene, with each value indicating whether one polygon is visible under the high-probability view angle set: a polygon visible under the high-probability view angle set may have the value 1 in the array, and 0 otherwise.
In one possible implementation, before obtaining the target polygon, the method further includes:
screening out, from the scene models, first-type scene models satisfying the occluder condition and second-type scene models satisfying the occludee condition;
and determining, among the second-type scene models, a scene model that is occluded by a first-type scene model under the first camera view angle as the target scene model.
In a virtual scene, multiple scene models usually exist simultaneously, and under a single view angle some of them may be occluded while others are not. Performing visibility detection on the polygons of every scene model in the virtual scene would introduce a high computational cost and hurt offline culling efficiency.
For this reason, in this embodiment of the application, before acquiring the target polygons under a target camera view angle, it may first be determined which scene models in the virtual scene are occluded by other scene models under that view angle, and those occluded scene models are taken as the target scene models. The step of acquiring target polygons is then performed only for the occluded scene models; the polygons of the other, non-occluded scene models are all considered visible under the target camera view angle, while the polygons of the occluded scene models other than the target polygons may be considered invisible.
That is, in the polygon visibility array, the value corresponding to the target polygon in the occluded scene model and the value corresponding to each polygon in the non-occluded scene model may be set to 1, and the values corresponding to polygons other than the target polygon in the occluded scene model may be set to 0.
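A short C++ sketch of filling the polygon visibility array under these rules follows; the parameter names and container choices are assumptions made for illustration.

#include <cstdint>
#include <unordered_set>
#include <vector>

// Sketch: build the polygon visibility array. polygonCount is the total
// number of polygons over all scene models; occludedModelPolygons holds the
// global indices of polygons belonging to occluded scene models; and
// visibleTargetPolygons holds the target polygons found visible among them.
std::vector<uint8_t> BuildPolygonVisibilityArray(
        std::size_t polygonCount,
        const std::unordered_set<std::size_t>& occludedModelPolygons,
        const std::unordered_set<std::size_t>& visibleTargetPolygons) {
    // Polygons of non-occluded models are all treated as visible (value 1).
    std::vector<uint8_t> visibility(polygonCount, 1);
    for (std::size_t index : occludedModelPolygons) {
        // Within occluded models, only the target polygons keep the value 1.
        visibility[index] = visibleTargetPolygons.count(index) ? 1 : 0;
    }
    return visibility;
}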
In a possible implementation manner, the process of obtaining the target polygon may be as follows:
numbering the vertexes of each polygon in the target scene model;
assigning different color values to the vertices of each polygon based on the number of the vertices of the polygon;
performing vertex coloring rendering on the target scene model based on the first camera view to obtain a vertex coloring rendering image corresponding to the target scene model;
obtaining visible vertexes in vertexes of each polygon based on color values of each pixel point in the vertex coloring rendering image;
and acquiring the target polygon based on visible vertexes of the polygons.
In this embodiment of the application, which polygons of a scene model are visible and which are invisible can be determined through offline rendering. For example, the development-side device assigns different color values to the vertices of each polygon in a scene model and then performs vertex shading on the scene model according to the previously configured rendering parameters, obtaining a vertex-shaded image into which the vertices of the visible polygons are mapped. The development-side device then traverses the color values of the pixels in the image, determines the vertices corresponding to the traversed color values as visible vertices, and from these visible vertices obtains the visible polygons of the scene model.
In practice, the polygons of each scene model may be triangles, and triangles may share vertices. In this case, the program may unpack the shared vertices into unshared ones: the unpacked vertices may coincide in position but carry the vertex colors of their respective triangles. Since the unpacking of shared vertices only serves to solve the visibility of the triangles they belong to, the temporarily created model can be discarded once the corresponding visibility information has been obtained, so the topology of the finally displayed model is not affected. Because multisample anti-aliasing is turned off during this rendering, the color of a single pixel is never a blend of two colors; a triangle smaller than one pixel, however, may be missed. Since the rendering resolution during pre-calculation can be several times that of the final rendering, assigning different colors to the copies of a shared vertex does not affect the subsequent decoding of vertex numbers.
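The shared-vertex unpacking can be sketched as follows, assuming an indexed triangle mesh with three indices per triangle; the simplified Vertex type and the function name are illustrative.

#include <vector>

struct Vertex { float x, y, z; };  // Position only; real vertices carry more attributes.

// Sketch: give every triangle its own three vertices so that each copy can
// carry the vertex color of the triangle it belongs to. The resulting
// temporary model is used only to solve triangle visibility and is discarded
// afterwards, so the displayed model's topology is unaffected.
void UnpackSharedVertices(const std::vector<Vertex>& vertices,
                          const std::vector<int>& triangleIndices,  // 3 per triangle
                          std::vector<Vertex>& outVertices,
                          std::vector<int>& outIndices) {
    outVertices.clear();
    outIndices.clear();
    for (int index : triangleIndices) {
        outIndices.push_back(static_cast<int>(outVertices.size()));
        outVertices.push_back(vertices[index]);  // Duplicate; positions may coincide.
    }
}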
In this embodiment of the application, the development-side device may reorganize the vertex array of the model according to triangle number and assign each triangle a determined vertex color, where the encoding from number index to color is as follows:
color(index): color_red = (index + 1) mod 256, color_green = ⌊(index + 1) / 256⌋ mod 256, color_blue = ⌊(index + 1) / 256^2⌋ (the inverse of the decoding formula below)
where a color consists of three channels, red, green, and blue, each with a value from 0 to 255. The corresponding decoding from color back to index number is:

index(color) = color_blue × 256^2 + color_green × 256 + color_red - 1
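The two formulas translate directly into code. The following C++ sketch implements the index-to-color encoding and its inverse; the struct and function names are illustrative.

#include <cstdint>

struct Color24 { uint8_t red, green, blue; };  // Channel values 0 to 255.

// Encode a triangle number index into a vertex color. The +1 offset keeps
// index 0 distinct from the black background used during rendering.
Color24 EncodeIndexToColor(int index) {
    const uint32_t v = static_cast<uint32_t>(index) + 1u;
    return { static_cast<uint8_t>(v % 256u),
             static_cast<uint8_t>((v / 256u) % 256u),
             static_cast<uint8_t>((v / (256u * 256u)) % 256u) };
}

// Decode a read-back pixel color into the triangle index; -1 corresponds to
// the black background (no triangle).
int DecodeColorToIndex(const Color24& color) {
    return color.blue * 256 * 256 + color.green * 256 + color.red - 1;
}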
the development end device can generate model visibility information corresponding to the high-probability visual angle set based on the polygon visibility array; wherein the model visibility information is used to indicate portions of the respective scene models in the virtual scene that are not occluded under the set of high probability perspectives. The process of generating model visibility information may refer to the description of step 203 to step 206 below.
Step 203, the development-side device sorts the polygons of the scene models based on the polygon visibility array, so that the polygons visible under the high-probability view angle set are contiguous among the sorted polygons of the scene models.
In this embodiment of the application, to improve the efficiency of submitting the sub-model corresponding to a high-probability view angle set for rendering during subsequent virtual scene display, the development-side device may reorder the polygon-related data (including the polygon array and the vertex array) of each scene model in the virtual scene according to the polygon visibility array, that is, arrange the data of polygons belonging to the same high-probability view angle set together, so that the rendering data to be submitted can be queried quickly during subsequent submission.
Step 204, the development-side device acquires the polygon visibility information of the unoccluded model portion based on the polygon index numbering result; the polygon index numbering result is the result of sequentially numbering the indexes of the sorted polygons of the scene models; the polygon visibility information contains the index interval of the polygons in the unoccluded model portion.
In this embodiment of the application, to improve query efficiency when rendering is subsequently submitted, the indexes of the polygons may be renumbered according to the sorting result of step 203, so that the polygon array corresponding to the high-probability view angle set can be queried later.
Step 205, the development-side device acquires the vertex visibility information of the unoccluded model portion based on the polygon vertex index numbering result; the polygon vertex index numbering result is the result of sequentially numbering the indexes of the vertices in the sorted polygons of the scene models; the vertex visibility information contains the index interval of the polygon vertices in the unoccluded model portion.
Similar to the polygon indexes, in this embodiment of the application the indexes of the polygon vertices may be renumbered according to the sorting result of step 203, so that the vertex array corresponding to the high-probability view angle set can be queried later.
Step 206, the development-side device acquires the polygon visibility information of the unoccluded model portion and the vertex visibility information of the unoccluded model portion as the model visibility information corresponding to the high-probability view angle set.
That is to say, in the offline stage, the development-side device culls the occluded model portion of the virtual scene at the granularity of polygons and polygon vertices, and keeps the unoccluded model portion as the sub-model of the virtual scene corresponding to the high-probability view angle set.
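The reordering of steps 203 to 206 can be sketched as follows for two high-probability view angle sets A and B; the four-way grouping order and all names are assumptions, chosen so that each set's visible triangles form one contiguous index interval.

#include <cstdint>
#include <vector>

struct IndexInterval { int first; int count; };

struct SubModelInfo {
    std::vector<int> order;       // New triangle order, as old triangle indices.
    IndexInterval visibleUnderA;  // Contiguous interval visible under view set A.
    IndexInterval visibleUnderB;  // Contiguous interval visible under view set B.
};

// visA/visB: per-triangle visibility (1 = visible) under the two view sets.
SubModelInfo ReorderTriangles(const std::vector<uint8_t>& visA,
                              const std::vector<uint8_t>& visB) {
    std::vector<int> onlyA, both, onlyB, neither;
    for (int t = 0; t < static_cast<int>(visA.size()); ++t) {
        if (visA[t] && visB[t]) both.push_back(t);
        else if (visA[t]) onlyA.push_back(t);
        else if (visB[t]) onlyB.push_back(t);
        else neither.push_back(t);
    }
    // Order [only A][both][only B][neither]: set A's visible triangles are the
    // first two groups, set B's are the middle two, each block contiguous.
    SubModelInfo info;
    for (const std::vector<int>* group : {&onlyA, &both, &onlyB, &neither}) {
        info.order.insert(info.order.end(), group->begin(), group->end());
    }
    info.visibleUnderA = {0, static_cast<int>(onlyA.size() + both.size())};
    info.visibleUnderB = {static_cast<int>(onlyA.size()),
                          static_cast<int>(both.size() + onlyB.size())};
    return info;
}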
In the scheme shown in this embodiment of the application, the development-side device may provide a rendering-engine tool plugin to support the graphics rendering development process. The developer can set the pre-calculation parameters (including the high-probability view angle set) in the engine, open the scene, and invoke the pre-calculation command to complete the pre-calculation stage for the model scene.
The setting interface for the pre-calculation parameters may include a camera settings part for each view angle parameter set, and may further include the set of possible camera transformations within each view angle parameter set. In addition, to facilitate determining the unoccluded model portion by rendering later, rendering-related parameters may also be set, such as the shader used to draw vertex colors during pre-calculation, the rendering resolution set, and the rendering precision multiplier.
Fig. 3 is a schematic diagram of a pre-calculation parameter setting interface according to an embodiment of the present application. As shown in fig. 3, the camera mode A (Camera Pattern A) 31 is the camera settings part of the first view angle parameter set, and the transformation information A (Transformations A) 32 contains the set of possible camera transformations for the first view angle parameter set. Similarly, camera mode B (Camera Pattern B) and transformation information B (Transformations B) are the corresponding settings of the second view angle parameter set. The vertex color shader (Color Shader) 33 is the shader used to draw vertex colors during pre-calculation. The rendering resolution (Screen Size) 34 is the rendering resolution set shared by the two view angle parameter sets. The accuracy magnification (Accuracy Times) 35 sets the precision multiplier expected for the pre-calculation rendering.
After the setting is completed, a scene needing pre-calculation is opened, and fig. 4 is a schematic diagram of an engine menu bar according to an embodiment of the present application. As shown in fig. 4, the pre-computation command and the debug command may be invoked through the engine's menu bar 41.
After the debug window command is opened, the engine may display a debug window, and fig. 5 is a schematic diagram of a debug window according to an embodiment of the present application. As shown in fig. 5, a related pop-up window 51 may be displayed in the window, and the number of different scene models of the current virtual scene may be displayed in the related pop-up window 51.
Clicking the different buttons in the editor displays the models in the scene according to different screening rules, so that a developer can check whether the occluders and occludees of the current scene have been screened correctly.
After the pre-calculation is completed, the scene is run, and the component performing the pre-calculation can be found in the camera object. The context menu of the component may contain the operations for culling the occluded model portions of the view parameter sets. The component exposes an interface in code, also providing a way for developers to trigger pre-calculation from other logic scripts.
Fig. 6 shows a culling control interface according to an embodiment of the present application. As shown in fig. 6, the context menu 61 of the "VRIBO control" component in the camera object contains "STA", "STB" and "RM" button controls, through which the user can enable culling of the scene for view set A, culling for view set B, and no culling, respectively. The component exposes an interface in code, also providing a way for developers to trigger the sub-model settings from other logic scripts.
After the model visibility information corresponding to the high-probability perspective set is generated, the developer device may deploy the high-probability perspective set and the model visibility information corresponding to the high-probability perspective set to the virtual scene display device as a part of rendering data of the virtual scene or as associated data of the virtual scene.
Step 207, the virtual scene display device obtains a target view angle, where the target view angle is a camera view angle for observing the virtual scene; the virtual scene corresponds to a high-probability view angle set, and the high-probability view angle set includes camera view angles whose access probability in the virtual scene is greater than a probability threshold.
In the embodiment of the application, the virtual scene display device can acquire the camera view angle of the virtual scene observed at the current moment in the process of displaying the virtual scene to obtain the target view angle.
Step 208, the virtual scene display device, in response to the target view angle belonging to the high-probability view angle set, obtains model visibility information corresponding to the high-probability view angle set; the model visibility information is used to indicate the portion of each scene model in the virtual scene that is not occluded under the high-probability view angle set.
The virtual scene display device may detect whether the target view belongs to a high probability view set, and if so, the processor (e.g., CPU) may submit rendering data of the submodel corresponding to the high probability view set to a rendering component (e.g., GPU) for rendering, so that the rendering component only needs to perform vertex coloring on a visible submodel, and does not need to perform vertex coloring on a complete scene model in the virtual scene. In this step, if the target view belongs to the high-probability view set, the virtual scene display device may obtain model visibility information corresponding to the high-probability view set.
The model visibility information may indicate the vertex indexes and polygon indexes of the polygons corresponding to the model portion that is unoccluded under the high-probability view angle set, and may be used to query the rendering data corresponding to that unoccluded model portion.
Step 209, the virtual scene display device submits the rendering data of the model portion indicated by the visibility information to a rendering component, so as to render the scene picture of the virtual scene through the rendering component; the submitted sub-model information includes the rendering data of the model portion indicated by the visibility information.
As described in the preceding steps, the visibility information includes the polygon visibility information of the unoccluded model portion and the vertex visibility information of the polygon vertices of the unoccluded model portion. In this embodiment of the application, the virtual scene display device may, through the polygon visibility information, read the polygon array of the polygons in the unoccluded model portion under the high-probability view angle set, read the vertex array of the polygon vertices in the unoccluded model portion, and submit the polygon array and vertex array so read to the rendering component for rendering, thereby rendering the sub-model under the high-probability view angle set.
In step 210, the virtual scene display device renders the scene picture of the virtual scene based on the scene models of the virtual scene in response to the target view angle not belonging to the high-probability view angle set.
In this embodiment of the application, if the target view does not belong to the high probability view set, the processor in the virtual scene display device may submit rendering data of each scene model in the virtual scene to the rendering component for rendering, so as to ensure normal display at a camera view corresponding to the non-high probability view set.
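A minimal runtime sketch combining steps 208 to 210 follows; ViewSetId, the interval fields, and the two submit callbacks are hypothetical stand-ins for the actual engine API, which the patent does not name.

#include <functional>

enum class ViewSetId { SetA, SetB, None };  // None: not in any high-probability set.

struct ModelVisibilityInfo {
    int firstPolygon, polygonCount;  // Index interval of visible polygons.
    int firstVertex, vertexCount;    // Index interval of visible vertices.
};

// Submit either the precomputed sub-model (culled) or the full scene models.
void RenderFrame(ViewSetId targetViewSet,
                 const ModelVisibilityInfo& infoA,
                 const ModelVisibilityInfo& infoB,
                 const std::function<void(const ModelVisibilityInfo&)>& submitSubModel,
                 const std::function<void()>& submitFullScene) {
    switch (targetViewSet) {
        case ViewSetId::SetA: submitSubModel(infoA); break;  // Culled for set A.
        case ViewSetId::SetB: submitSubModel(infoB); break;  // Culled for set B.
        case ViewSetId::None: submitFullScene(); break;      // Fallback: no culling.
    }
}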
In step 211, the virtual scene display device displays a scene picture of the virtual scene.
To sum up, in the scheme shown in this embodiment of the application, indication information of the model portion that is unoccluded under the high-probability view angle set is generated for the virtual scene in advance. During virtual scene rendering, when the target view angle of the user falls within the high-probability view angle set, only the unoccluded model portion is rendered; the occluded model portion does not need to be submitted for rendering and, correspondingly, does not need to be vertex-shaded. The vertex shading work in the rendering process is therefore reduced in most cases, improving the rendering efficiency of the virtual scene.
The scheme shown in the embodiment corresponding to fig. 2 can be divided into three aspects: the system design, the system pre-calculation implementation process, and the system runtime implementation process.
First, system design
When a scene is rendered, let Ue be the full set of camera view angle parameters e, and let p(e) be the activity probability of view angle parameter e during rendering. Then:
Σ_{e ∈ Ue} p(e) = 1
Based on the frequently visited range of the view angle e, two disjoint sparse view sets Se_1 and Se_2 are created so as to cover as much of the activity probability as possible, i.e., so that p(Se_1) and p(Se_2) are high. Now let Se_0 = Ue - Se_1 - Se_2 serve as the fallback case when the culling condition is not satisfied; its activity probability is p(Se_0) = 1 - p(Se_1) - p(Se_2).
For the set Ut of all triangles, define a subset St, and define St(e) as the visible set under a certain view angle e. When the view angle e is uncertain, visibility is also uncertain, and the visible set is then taken to be the full set Ut. The visible sets St_e1 and St_e2 of Se_1 and Se_2 are determined by calculation in the pre-calculation stage, resulting in:
St_e1 = ⋃_{e ∈ Se_1} St(e),  St_e2 = ⋃_{e ∈ Se_2} St(e)
four sets St _1, St _2, St _3, St _4 are then created. Such that:
St_1 = St_e1 \ St_e2,  St_2 = St_e1 ∩ St_e2,  St_3 = St_e2 \ St_e1,  St_4 = Ut \ (St_e1 ∪ St_e2)
The vertex array and the index array of the model are rearranged in the order St_1, St_2, St_3, St_4, so that different subsets of the model can be submitted for rendering under different view sets:
submit(e) = St_1 ∪ St_2, if e ∈ Se_1;  St_2 ∪ St_3, if e ∈ Se_2;  Ut, if e ∈ Se_0
a vertex/triangle array visualization schematic of the model may be as shown in fig. 7.
In a modern graphics rendering pipeline, when rendering is submitted to the GPU (Graphics Processing Unit), only part of the triangle array may be designated to be drawn, and the sub-interval of the vertex array corresponding to those triangles may also be designated, thereby implementing triangle culling and vertex culling. In the actual engineering implementation, the vertices can also be processed following the triangle reorganization idea, so that the program performs triangle culling and vertex culling simultaneously for Se_1 and Se_2. After the pre-calculation phase is completed, the new model data is saved and the sub-model information is exported.
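As an illustration of such a sub-range submission, the following sketch uses OpenGL's glDrawRangeElements as a stand-in; the patent does not name a graphics API, so this call and the offset arithmetic are assumptions.

#include <cstdint>
#include <GL/gl.h>  // Assumes an OpenGL 1.2+ context with bound vertex/index buffers.

// Draw only the contiguous block of visible triangles. firstIndexElement and
// indexElementCount select the sub-interval of the triangle (index) array,
// counted in individual indices (3 per triangle); firstVertex and vertexCount
// bound the vertex sub-interval those triangles reference, which lets the
// driver restrict vertex processing to that range.
void DrawVisibleSubset(GLuint firstIndexElement, GLsizei indexElementCount,
                       GLuint firstVertex, GLuint vertexCount) {
    glDrawRangeElements(GL_TRIANGLES,
                        firstVertex,                    // Lowest vertex index used.
                        firstVertex + vertexCount - 1,  // Highest vertex index used.
                        indexElementCount,
                        GL_UNSIGNED_INT,
                        reinterpret_cast<const void*>(
                            static_cast<uintptr_t>(firstIndexElement) * sizeof(GLuint)));
}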
At runtime, the view set containing the current view is queried, the corresponding visible set is obtained, and the sub-model information is sent to the GPU, so that scene triangles and vertices are culled at extremely low cost.
Second, system precomputation implementation process
A flowchart of the pre-calculation implementation of the system may be as shown in fig. 8. First, the pre-calculation configuration is input into the program (step S81). The program then acquires the scene settings, obtains the scene, sparse view, and pre-calculation-related item settings, and sets up the scene (step S82). Next, the occluders and occludees in the scene are found, vertex colors are set for all vertices according to the scene triangle index order, and other unrelated renderers are turned off; the view set Se_1 is traversed to render and compute the visibility St_e1, and the view set Se_2 is traversed to render and compute the visibility St_e2 (step S83). Each occluded model of the scene is then reorganized (step S84): the triangle visibility is obtained, the model is reorganized according to the triangle visibility, the triangle indexes are re-associated with the vertices, the reorganized model file is output, and the sub-model information of St_e1 and St_e2 is computed (step S85).
The detailed implementation scheme of the main steps in the process is as follows:
1) Setting up the scene
The scene needs to be set up first, and the occluders and occludees of the current scene are obtained according to rules. An occluder is defined as a model that can occlude other models during real-time rendering, and may be screened as a static model made of non-translucent material; an occludee is defined as a model whose triangles can be culled according to the view angle during real-time rendering, and may be screened by conditions such as: static, non-translucent material, and sufficient culling benefit.
The existing scene and models are backed up, other unrelated rendering components are turned off, the screened occluders and occludees are numbered by triangle, the vertex array of each model is reorganized according to the triangle numbers, and each triangle is assigned a determined vertex color, where the encoding from number index to color is as follows:
color(index): color_red = (index + 1) mod 256, color_green = ⌊(index + 1) / 256⌋ mod 256, color_blue = ⌊(index + 1) / 256^2⌋ (the inverse of the decoding formula below)
where a color consists of three channels, red, green, and blue, each with a value from 0 to 255. The corresponding decoding from color back to index number is:

index(color) = color_blue × 256^2 + color_green × 256 + color_red - 1
2) Computing the visibility of a view set
This step acquires visibility sets St _ e1 and St _ e2 in view parameter sets Se _1 and Se _2, respectively, inputs s view parameter set Se _1, outputs St _ e1, and a flowchart of output of triangle visibility sets thereof can be as shown in fig. 9, taking Se _1 as an example. Firstly, inputting a view parameter set Se _1 (i.e. step S91), setting maxIndex as the total number of triangles of an occluded object (i.e. step S92), then initializing a result array with the length of maxIndex, setting all the result arrays to 0(S93), then traversing all the view parameters of the view parameter set Se _1, judging whether all the view parameters of Se _1 are traversed (i.e. step S94), if all the view parameters of Se _1 are not traversed, setting the view parameters of a camera (i.e. step S95), then rendering the current scene by a shader for drawing vertex colors (i.e. step S96), then reading back the rendering result (i.e. step S97), judging whether each pixel of a completion frame buffer is traversed (i.e. step S98), if each pixel of the completion frame buffer is judged to be traversed, executing step S94, if each pixel of the completion frame buffer is not traversed, decoding the pixel color into a number index (i.e. step S99), then, the array result [ index ] is set to 1 (i.e., step S910), and then step S98 is continuously executed, if it is determined that all the view angle parameters of Se _1 are completely traversed while step S94 is executed, the corresponding triangle visible set St _ e1 is output (i.e., step S911).
In this process, the view parameters should include the camera's rendering resolution, viewport aspect ratio, field-of-view angle, camera position, and spatial rotation. It should be noted that before invoking rendering, the camera's multisample anti-aliasing and high-dynamic-range functionality should be turned off, the background color should be set to black, and the rendering target should be set to a rendering texture of the given resolution.
When the rendering result is read back, the currently active rendering texture is set to the rendering texture used for rendering, and a ReadPixel instruction is executed to read the data from the GPU to the CPU, yielding a color array at the rendering resolution; the triangle corresponding to each pixel can then be obtained through the decoding formula in step 1).
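The per-view rendering and readback loop (steps S91 to S911) can be sketched as follows. The sketch assumes a hypothetical engine binding named engine; its camera, off-screen-texture and pixel-readback calls stand in for whatever the actual engine (e.g., a Unity-style ReadPixels path) provides:

```python
def compute_triangle_visibility(view_params, max_index, engine):
    """Return a 0/1 array of length max_index: result[i] == 1 iff occludee
    triangle i is hit by at least one pixel under some view in view_params."""
    result = [0] * max_index
    for view in view_params:                        # traverse Se_1 (or Se_2)
        engine.set_camera(view)                     # resolution, aspect, FOV, position, rotation
        engine.disable_msaa_and_hdr()               # required before the color pass
        engine.set_background_color((0, 0, 0))      # black decodes to index -1 (no triangle)
        frame = engine.render_vertex_color_pass()   # shader draws the encoded vertex colors
        for red, green, blue in frame.pixels():     # read back from GPU to CPU
            index = blue * 256 ** 2 + green * 256 + red - 1
            if index >= 0:                          # skip background pixels
                result[index] = 1
    return result                                   # e.g. St_e1 for Se_1
```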
3) Reorganizing the occluded models
This step is the key step of the present application. After step 2), the program has obtained two arrays St_e1 and St_e2 describing scene-triangle visibility, each of length maxIndex. A visualization combining the two pieces of triangle visibility information with the triangle index array of the original scene model may be as shown in fig. 10, wherein the data portion is random data.
Taking model 1 as an example, the model reorganization process comprises the following steps:
S1, extract the current model's triangle visibility St_e1 and St_e2 from the scene triangle visibility results;
S2, calculate the model vertex visibility Sv_e1 and Sv_e2 from the model triangle visibility;
S3, reorganize the vertex array;
S4, reorganize the triangle array;
S5, update the vertex references of the triangle array to the new vertex indices;
S6, output the model and sub-model information.
Please refer to fig. 11, which illustrates a schematic diagram of vertex visibility output according to an embodiment of the present application. Since the triangle array stores the model vertex indices in triplets, the vertex visibility Sv_e1 can be obtained from St_e1 through the method shown in fig. 11. First, the model and its triangle visibility information are input (i.e., step S1101), and the total vertex count vertexCount is obtained (i.e., step S1102). An array Sv_e1 of length vertexCount is initialized to all zeros (i.e., step S1103). It is then judged whether the triangle visibility information array has been fully traversed (i.e., step S1104); if not, it is judged whether the current triangle is visible (i.e., step S1105). If the triangle is not visible, step S1104 continues; if the triangle is visible, its vertex indices a, b, c are taken out (i.e., step S1106), and Sv_e1[a], Sv_e1[b] and Sv_e1[c] are set to 1 (i.e., step S1107). When the triangle visibility information array has been fully traversed, the vertex visibility array Sv_e1 is output (i.e., step S1108).
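A compact sketch of this triangle-to-vertex visibility propagation (in Python; the names are ours, not from the original):

```python
def vertex_visibility(triangle_indices, tri_visible, vertex_count):
    """Derive per-vertex visibility Sv from per-triangle visibility St:
    a vertex is visible iff at least one visible triangle references it.

    triangle_indices -- flat list, three vertex indices per triangle
    tri_visible      -- 0/1 list, one entry per triangle (e.g. St_e1)
    """
    sv = [0] * vertex_count
    for t, visible in enumerate(tri_visible):
        if visible:
            a, b, c = triangle_indices[3 * t: 3 * t + 3]
            sv[a] = sv[b] = sv[c] = 1
    return sv
```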
Taking triangle 2 as an example: from step 2), the program knows that this triangle is visible under Se_1 and invisible under Se_2, so its corresponding vertices 4, 5 and 6 are likewise visible under Se_1 and invisible under Se_2. Through this process, a visible-set diagram as shown in fig. 12 can be obtained. The vertex attributes of the vertex array represent the information of each vertex and, depending on the data exported from the model during three-dimensional modeling, may include vertex coordinates, texture coordinates, normals, tangents, and so on; concatenated together, the vertex attribute data forms the actual content of the vertex array.
In order to cull vertex information under Se_1 and Se_2, the program needs to recombine the entire vertex attribute array according to the obtained Sv_e1 and Sv_e2. The vertex data is recombined in the same way as the triangle-array recombination given in the system design. Taking the intersection and differences of Sv_e1 and Sv_e2, and denoting the complete vertex set as Uv, the following four temporary intervals are obtained:
Sv_1 = Sv_e1 \ Sv_e2 (visible only under Se_1)
Sv_2 = Sv_e1 ∩ Sv_e2 (visible under both Se_1 and Se_2)
Sv_3 = Sv_e2 \ Sv_e1 (visible only under Se_2)
Sv_4 = Uv \ (Sv_e1 ∪ Sv_e2) (visible under neither)
The vertex array is then rearranged in the order Sv_1, Sv_2, Sv_3, Sv_4 to obtain a new vertex array, whose schematic diagram may be as shown in fig. 13. It can be seen that after recombination, Sv_1 is visible under e1 and invisible under e2; Sv_2 is visible under both e1 and e2; Sv_3 is invisible under e1 and visible under e2; and Sv_4 is invisible under both.
The indexes (offsets) of the four groups in the new vertex array are:

offset(Sv_1) = 0
offset(Sv_2) = |Sv_1|
offset(Sv_3) = |Sv_1| + |Sv_2|
offset(Sv_4) = |Sv_1| + |Sv_2| + |Sv_3|
From these, the vertex visible subintervals VBO_e0, VBO_e1 and VBO_e2 under Se_0, Se_1 and Se_2 can be obtained, where a VBO is a pair defined as {offset, count}: offset is the offset into the vertex array and count is the number of vertices.
VBO_e0 = {0, |Uv|}
VBO_e1 = {0, |Sv_1| + |Sv_2|}
VBO_e2 = {|Sv_1|, |Sv_2| + |Sv_3|}
Storing the whole vertex array in the new index order in place of the original vertex array completes the recombination of the vertex array.
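The vertex-array recombination and the derivation of the VBO subintervals can be sketched as follows (a Python sketch under the partition described above; vertices are assumed to be gathered as one attribute record per vertex):

```python
def reorganize_vertices(vertices, sv_e1, sv_e2):
    """Reorder vertices into the groups Sv_1..Sv_4 and return the reordered
    array, the old->new index map (needed later to remap triangles), and the
    {offset, count} pairs VBO_e0, VBO_e1, VBO_e2."""
    groups = ([], [], [], [])
    for i in range(len(vertices)):
        if sv_e1[i] and not sv_e2[i]:
            groups[0].append(i)        # Sv_1: visible only under Se_1
        elif sv_e1[i] and sv_e2[i]:
            groups[1].append(i)        # Sv_2: visible under both
        elif sv_e2[i]:
            groups[2].append(i)        # Sv_3: visible only under Se_2
        else:
            groups[3].append(i)        # Sv_4: visible under neither
    order = [i for g in groups for i in g]
    remap = {old: new for new, old in enumerate(order)}
    n1, n2, n3 = len(groups[0]), len(groups[1]), len(groups[2])
    vbos = ((0, len(vertices)),        # VBO_e0: the full set, for Se_0
            (0, n1 + n2),              # VBO_e1: Sv_1 and Sv_2 are contiguous
            (n1, n2 + n3))             # VBO_e2: Sv_2 and Sv_3 are contiguous
    return [vertices[i] for i in order], remap, vbos
```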
The triangle array is then reordered in the same way, and the schematic diagram of the rearranged triangle array can be as shown in fig. 14.
The same procedure applied to the triangles yields:

St_1 = St_e1 \ St_e2 (visible only under Se_1)
St_2 = St_e1 ∩ St_e2 (visible under both)
St_3 = St_e2 \ St_e1 (visible only under Se_2)
St_4 = Ut \ (St_e1 ∪ St_e2) (visible under neither)
and the triangle visible subintervals IBO_e0, IBO_e1 and IBO_e2 under Se_0, Se_1 and Se_2 are:

IBO_e0 = {0, |Ut|}
IBO_e1 = {0, |St_1| + |St_2|}
IBO_e2 = {|St_1|, |St_2| + |St_3|}
The reorganization of the triangle array requires one final step: because the vertex array has been recombined, the positions of the vertices have changed, and the triangle array's vertex references must be remapped. For example, the first triangle in the new triangle array carries the old vertex indices 4, 5, 6, whose corresponding vertex attributes are E, F, G; in the new vertex array, however, these three vertices are located not at 4, 5, 6 but at 0, 1, 2. The mapping must therefore be inverted through the vertex-array transformation relation to obtain the correct indices 0, 1, 2; the vertex indices after remapping may be as shown in fig. 15.
After the mapping is completed, the content of the triangle array is replaced with the new vertex indices, and the whole triangle array is stored, completing the recombination of the triangle array.
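Combining the grouping and the remapping, the triangle-array recombination can be sketched as follows (Python; the counts in the IBO pairs below are in triangles, so multiply by three if the engine counts individual indices):

```python
def reorganize_triangles(triangle_indices, st_e1, st_e2, vertex_remap):
    """Reorder triangles into the groups St_1..St_4, rewrite every vertex
    reference through the old->new vertex map, and return the flat index
    array plus the {offset, count} pairs IBO_e0, IBO_e1, IBO_e2."""
    groups = ([], [], [], [])
    for t in range(len(st_e1)):
        key = (0 if st_e1[t] and not st_e2[t]
               else 1 if st_e1[t] and st_e2[t]
               else 2 if st_e2[t]
               else 3)
        old = triangle_indices[3 * t: 3 * t + 3]
        groups[key].append([vertex_remap[v] for v in old])   # remap vertices
    n1, n2, n3 = len(groups[0]), len(groups[1]), len(groups[2])
    ibos = ((0, len(st_e1)),           # IBO_e0: full set, for Se_0
            (0, n1 + n2),              # IBO_e1: St_1 and St_2 are contiguous
            (n1, n2 + n3))             # IBO_e2: St_2 and St_3 are contiguous
    flat = [v for g in groups for tri in g for v in tri]
    return flat, ibos
```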
Since the model has the same length before and after recombination, and the complete sets Ut and Uv are what is visible under Se_0, no additional data needs to be saved for Se_0; for Se_1 and Se_2, the data to be saved are IBO_e1, VBO_e1, IBO_e2 and VBO_e2. This saved data is the model visibility information corresponding to the high-probability view set in the embodiment shown in fig. 2.
The processed model (including the rearranged and renumbered vertex array and triangle array) replaces the original model, and the sub-model interval information (i.e., the model visibility information) is stored in a renderer script of the scene, which completes the pre-calculation process.
Third, the runtime implementation process of the system
A schematic diagram of the runtime implementation process in the virtual scene may be as shown in fig. 16. The program acquires the current view parameters e (i.e., step S1601) and, according to the definition and classification of the sparse view sets in the system design chapter, calculates the corresponding sparse view set (i.e., step S1602). It judges whether e belongs to the Se_1 range (i.e., step S1603); if so, the visible set is set to St_e1 (i.e., step S1604). If not, it judges whether e belongs to the Se_2 range (i.e., step S1605); if so, the visible set is set to St_e2 (i.e., step S1606); otherwise, the visible set is set to St_e0 (i.e., step S1607). The corresponding VBO and IBO subinterval information is then extracted from the model information for the selected visible set (St_e1, St_e2 or St_e0) and submitted to the GPU for rendering (i.e., step S1608).
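The runtime selection logic admits a very small sketch (Python; the name "contains" stands in for the range test on the sparse view sets, and "model_info" for the sub-model interval information saved at pre-calculation; both names are assumptions for illustration):

```python
def select_draw_range(view, se_1, se_2, model_info):
    """Classify the current view parameters and return the precomputed
    {offset, count} subintervals to submit to the GPU."""
    if se_1.contains(view):                                  # steps S1603/S1604
        return model_info.vbo_e1, model_info.ibo_e1          # visible set St_e1
    if se_2.contains(view):                                  # steps S1605/S1606
        return model_info.vbo_e2, model_info.ibo_e2          # visible set St_e2
    return model_info.vbo_e0, model_info.ibo_e0              # fallback: full set St_e0

# The returned offset/count pairs are then used as the draw range of the
# model's vertex and index buffers when submitting to the GPU (step S1608).
```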
The method provides a scene triangle and vertex culling scheme under sparse view parameters, trading negligible runtime consumption and memory occupation for a large culling rate; it is suitable for graphics products with sparse view parameters and high scene complexity, and can run compatibly across different engines, devices and platforms.
The application solves problems that existing schemes in the industry cannot: existing dynamic culling schemes in the CPU stage operate at the coarse granularity of whole models, so culling cannot be performed more finely; static culling schemes for the scene can reach fine granularity, but the large degree of freedom of the view space keeps the culling rate low; and fine-grained dynamic model culling schemes all run in the GPU stage, where consumption of GPU bandwidth and pixel shaders remains large, yielding ineffective optimization in scenes with large vertex counts. The dynamic culling scheme of the present application can execute on the CPU side, guarantees ultra-high culling fineness and culling correctness, and incurs only minimal additional performance and space consumption.
In the scheme of the present application, the theoretical culling yield of the system is:
Yield = P(Se_1) × (|Ut| − |St_e1|) / |Ut| + P(Se_2) × (|Ut| − |St_e2|) / |Ut|

where P(Se_i) denotes the probability that the camera view parameters fall within the sparse view set Se_i.
where |Ut| is a fixed quantity; the two sparse view sets Se_1 and Se_2 should be chosen so that each has a high activity probability and a high per-set culling rate, in order to maximize the system yield. For example, when rendering a two-player board game, the camera moves mostly in the peripheral regions of the two players' view angles, so placing Se_1 and Se_2 over these two regions brings a good culling effect. In this type of scene, since the camera parameters change infrequently, the culling process can be further optimized in the scene's program logic: the camera-parameter view set need not be recalculated every frame, but can be set once whenever the view angle changes.
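As an illustrative calculation (the figures here are assumed for illustration, not taken from the test data): if the camera falls within Se_1 with probability 0.6 where 80% of triangles are culled, and within Se_2 with probability 0.35 where 75% are culled, the expected yield is 0.6 × 0.8 + 0.35 × 0.75 ≈ 0.74, i.e., roughly 74% of the scene triangles are culled on average.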
In addition, in some shadow-rendering scenes, one sparse view set can be set to the light-source projection view, which can bring a large improvement to the shadow-rendering stage. A proxy model for the shadow-casting pass is then set during runtime rendering so that the shadow-rendering result is represented correctly on the picture.
Since the visible sets are determined in the pre-calculation process by rendering and reading back visibility, the culling covers three types (occlusion culling, back-face culling and frustum culling) and the culling effect is considerable; functions such as CPU frustum culling and PreZ culling need not be enabled, further reducing the system's runtime consumption.
The method was tested on an actual project and verified in two-player board-game rendering. Over the full scene, more than 99% of the camera parameter sets are concentrated around the two players' view angles; Se_1 and Se_2 were therefore set to the active view-angle regions of the two players, and the resolutions of several devices were used as the rendering resolutions for pre-calculation. The comparison results show a large improvement in both scenes; the test results are shown in Table 1 below (the data includes some dynamic objects that cannot be culled, and the like):
TABLE 1
(The contents of Table 1 are provided as an image in the original publication and are not reproduced here.)
For example, a scene preview of one view of scene 2 before culling may be as shown in fig. 17; after culling, a free-view preview of Sv_e1 and Sv_e2 may be as shown in fig. 18. The camera's actual rendering result is identical before and after culling.
Fig. 19 is a block diagram of a screen presentation apparatus according to an exemplary embodiment of the present application, which may be used to perform all or part of the steps performed by the virtual scene presentation device in the method shown in fig. 2. As shown in fig. 19, the apparatus includes:
a view angle obtaining module 1901, configured to obtain a target view angle, where the target view angle is a camera view angle for observing a virtual scene; the virtual scene corresponds to a high-probability view angle set, and the high-probability view angle set comprises camera view angles with access probability larger than a probability threshold value in the virtual scene;
a visibility information obtaining module 1902, configured to, in response to that the target view belongs to the high-probability view set, obtain model visibility information corresponding to the high-probability view set; the model visibility information is used to indicate portions of models of individual scene models in the virtual scene that are not occluded under the set of high probability perspectives;
a rendering module 1903, configured to submit sub-model information of the model portion indicated by the visibility information to a rendering component, so as to render a scene picture of the virtual scene through the rendering component, the sub-model information comprising rendering data of the model portion indicated by the visibility information;
a displaying module 1904, configured to display a scene picture of the virtual scene.
In one possible implementation, the scene model is composed of at least two polygons; the visibility information includes polygon visibility information for the unoccluded model portion and visibility information for polygon vertices of the unoccluded model portion.
In one possible implementation, the polygon visibility information includes an index interval of a polygon in the unoccluded model portion;
the vertex visibility information includes index ranges for polygon vertices in the unoccluded model portion.
In one possible implementation, the apparatus further includes:
and the picture rendering module is used for rendering the scene picture of the virtual scene based on the scene model of the virtual scene in response to the target view angle not belonging to the high-probability view angle set.
To sum up, according to the scheme shown in the embodiments of the present application, indication information for the model portions that are not occluded under the high-probability view set is generated in advance for the virtual scene. During virtual scene rendering, when the user's target view angle falls within the high-probability view set, only the model portions unoccluded under that set are rendered; the occluded model portions need not be submitted for rendering and, accordingly, need no vertex shading. Vertex-shading work in the rendering process is thereby reduced under most conditions, and the rendering efficiency of the virtual scene is improved.
Fig. 20 is a block diagram of a screen generating apparatus according to an exemplary embodiment of the present application, which may be used to execute all or part of the steps executed by the development-side device in the method shown in fig. 2. As shown in fig. 20, the apparatus includes:
a view angle set obtaining module 2001, configured to obtain a high probability view angle set corresponding to a virtual scene, where the high probability view angle set includes camera view angles whose access probabilities in the virtual scene are greater than a probability threshold;
an indication information obtaining module 2002, configured to determine, based on the high-probability view angle set, visible model part indication information, where the visible model part indication information is used to indicate a part of each scene model in the virtual scene that is not occluded under the high-probability view angle set;
a visibility information generation module 2003, configured to generate model visibility information corresponding to the high probability view set; the visibility information is used for indicating that the virtual scene display equipment submits rendering data of the model part indicated by the visibility information to a rendering component when the target visual angle belongs to the high-probability visual angle set; the target perspective is a camera perspective from which the virtual scene is viewed.
In one possible implementation, the scene model is composed of at least two polygons; the visibility information includes polygon visibility information for the unoccluded model portion and visibility information for polygon vertices of the unoccluded model portion.
In a possible implementation manner, the indication information obtaining module includes:
the array obtaining submodule is used for obtaining a polygon visibility array of each scene model under the high-probability visual angle set and taking the polygon visibility array as the visible model part indication information; the polygon visibility array is used for indicating whether polygons in each scene model are visible under the high-probability view angle set.
In a possible implementation manner, the polygon visibility array includes values corresponding to polygons in the scene models respectively;
the array acquisition submodule includes:
a polygon acquiring unit, configured to acquire a target polygon, where the target polygon is a polygon that is in a visible state at a first camera view angle, among polygons included in a target scene model; the target scene model is a scene model which is occluded under the first camera view angle in each scene model; the first camera perspective is any one of the set of high probability perspectives;
and the numerical value setting unit is used for setting the numerical value corresponding to the target polygon in the polygon visibility array as a specified numerical value.
In one possible implementation, the apparatus further includes:
the model screening unit is used for screening a first type scene model meeting the shielding condition and a second type scene model meeting the shielded condition from the scene models before acquiring the target polygon;
and the target determining unit is used for determining a scene model which is shielded by the first type scene model under a first camera view angle in the second type scene models as the target scene model.
In a possible implementation manner, the polygon obtaining unit is configured to,
numbering the vertexes of each polygon in the target scene model;
assigning different color values to the vertices of the respective polygons based on the numbers of the vertices of the respective polygons;
performing vertex coloring rendering on the target scene model based on the first camera view to obtain a vertex coloring rendering image corresponding to the target scene model;
acquiring visible vertexes in vertexes of each polygon based on color values on each pixel point in the vertex coloring rendering image;
and acquiring the target polygon based on visible vertexes of the polygons.
In one possible implementation manner, the visibility information generating module includes:
the sorting submodule is used for sorting the polygons of the scene models based on the polygon visibility array; polygons visible under the high-probability visual angle set are continuous in the sequenced polygons of the scene models;
the first information acquisition submodule is used for acquiring the polygon visible information of the unoccluded model part based on the polygon index numbering result; the polygon index numbering result is a result of sequentially numbering the indexes of the polygons of the sequenced scene models; the polygon visibility information contains an index interval of a polygon in the unoccluded model part;
the second information acquisition submodule is used for acquiring the vertex visible information of the unoccluded model part based on the polygon vertex index numbering result; the polygon vertex index numbering result is a result of sequentially numbering indexes of vertexes in the polygons of the sequenced scene models; the vertex visibility information includes index intervals for polygon vertices in the unoccluded model portion;
and the visibility information generation submodule is used for acquiring the polygon visible information of the unoccluded model part and the vertex visible information of the unoccluded model part as the model visibility information corresponding to the high-probability visual angle set.
To sum up, according to the scheme shown in the embodiments of the present application, indication information for the model portions that are not occluded under the high-probability view set is generated in advance for the virtual scene. During virtual scene rendering, when the user's target view angle falls within the high-probability view set, only the model portions unoccluded under that set are rendered; the occluded model portions need not be submitted for rendering and, accordingly, need no vertex shading. Vertex-shading work in the rendering process is thereby reduced under most conditions, and the rendering efficiency of the virtual scene is improved.
FIG. 21 is a block diagram illustrating a computer device according to an example embodiment. The computer device may be implemented as a development-side device or a virtual scene display device in the system shown in fig. 1.
The computer apparatus 2100 includes a Central Processing Unit (CPU) 2101, a system Memory 2104 including a Random Access Memory (RAM) 2102 and a Read-Only Memory (ROM) 2103, and a system bus 2105 connecting the system Memory 2104 and the Central Processing Unit 2101. Optionally, the computer device 2100 also includes a basic input/output system 2106 to facilitate transfer of information between various devices within the computer, and a mass storage device 2107 for storing an operating system 2113, application programs 2114, and other program modules 2115.
The mass storage device 2107 is connected to the central processing unit 2101 through a mass storage controller (not shown) connected to the system bus 2105. The mass storage device 2107 and its associated computer-readable media provide non-volatile storage for the computer device 2100. That is, the mass storage device 2107 may include a computer-readable medium (not shown) such as a hard disk or Compact disk Read-Only Memory (CD-ROM) drive.
Without loss of generality, the computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, flash memory or other solid state storage technology, CD-ROM, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that the computer storage media is not limited to the foregoing. The system memory 2104 and mass storage device 2107 described above may be collectively referred to as memory.
The computer device 2100 may be connected to the internet or other network device through a network interface unit 2111 connected to the system bus 2105.
The memory further includes one or more programs, the one or more programs are stored in the memory, and the central processor 2101 implements all or part of the steps performed by the development-side device in the method shown in fig. 2 by executing the one or more programs; alternatively, the central processor 2101 may implement all or part of the steps performed by the virtual scene representation apparatus in the method shown in fig. 2 by executing the one or more programs.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, which may be a computer readable storage medium contained in a memory of the above embodiments; or it may be a separate computer-readable storage medium not incorporated in the terminal. The computer readable storage medium has stored therein at least one computer program that is loaded and executed by a processor to implement the method according to the above-described embodiments of the present application.
Optionally, the computer-readable storage medium may include: a Read Only Memory (ROM), a Random Access Memory (RAM), a Solid State Drive (SSD), or an optical disc. The Random Access Memory may include a resistive Random Access Memory (ReRAM) and a Dynamic Random Access Memory (DRAM). The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
In an exemplary embodiment, a computer program product or computer program is also provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method described in the above embodiments.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the aspects disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a scope of the application being indicated by the following claims.
It is to be understood that the present application is not limited to the precise arrangements/instrumentalities shown in the drawings and described above, and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited by the appended claims.

Claims (15)

1. A method for displaying a picture, the method comprising:
acquiring a target visual angle, wherein the target visual angle is a camera visual angle for observing a virtual scene; the virtual scene corresponds to a high-probability view angle set, and the high-probability view angle set comprises camera view angles with access probability larger than a probability threshold value in the virtual scene;
responding to the target view angle belonging to the high-probability view angle set, and acquiring model visibility information corresponding to the high-probability view angle set; the model visibility information is used to indicate portions of models of individual scene models in the virtual scene that are not occluded under the set of high probability perspectives;
submitting sub-model information of the model part indicated by the visibility information to a rendering component, so as to render a scene picture of the virtual scene through the rendering component, the sub-model information comprising rendering data of the model part indicated by the visibility information;
and displaying a scene picture of the virtual scene.
2. The method of claim 1, wherein the scene model is composed of at least two polygons; the visibility information includes polygon visibility information for the unoccluded model portion and visibility information for polygon vertices of the unoccluded model portion.
3. The method of claim 2,
the polygon visibility information contains an index interval of a polygon in the unoccluded model part;
the vertex visibility information includes index ranges for polygon vertices in the unoccluded model portion.
4. The method of claim 1, further comprising:
rendering a scene picture of the virtual scene based on a scene model of the virtual scene in response to the target perspective not belonging to the set of high probability perspectives.
5. An information generating method, characterized in that the method comprises:
acquiring a high-probability view angle set corresponding to a virtual scene, wherein the high-probability view angle set comprises camera view angles of which the access probability in the virtual scene is greater than a probability threshold;
determining visible model part indication information based on the high probability perspective set, wherein the visible model part indication information is used for indicating the part of each scene model in the virtual scene that is not occluded under the high probability perspective set;
generating model visibility information corresponding to the high-probability visual angle set; the visibility information is used for indicating that the virtual scene display equipment submits rendering data of the model part indicated by the visibility information to a rendering component when the target visual angle belongs to the high-probability visual angle set; the target perspective is a camera perspective from which the virtual scene is viewed.
6. The method of claim 5, wherein the scene model is composed of at least two polygons; the visibility information includes polygon visibility information for the unoccluded model portion and visibility information for polygon vertices of the unoccluded model portion.
7. The method of claim 6, wherein determining, based on the set of high probability perspectives, the visible model part indication information comprises:

acquiring a polygon visibility array of each scene model under the high-probability visual angle set as the visible model part indication information; the polygon visibility array is used for indicating whether polygons in each scene model are visible under the high-probability view angle set.
8. The method according to claim 7, wherein the polygon visibility array comprises values corresponding to polygons in each of the scene models;
the acquiring a polygon visibility array of each scene model under the high probability view set includes:
acquiring a target polygon, wherein the target polygon is a polygon in a visible state under a first camera view angle in each polygon included in a target scene model; the target scene model is a scene model which is occluded under the first camera view angle in each scene model; the first camera perspective is any one of the set of high probability perspectives;
and setting the numerical value corresponding to the target polygon in the polygon visibility array as a specified numerical value.
9. The method of claim 8, wherein prior to obtaining the target polygon, the method further comprises:
screening out a first type scene model meeting the shielding condition and a second type scene model meeting the shielded condition from the scene models;
and determining a scene model which is shielded by the first type scene model under the first camera view angle in the second type scene model as the target scene model.
10. The method of claim 8, wherein the obtaining the target polygon comprises:
numbering the vertexes of each polygon in the target scene model;
assigning different color values to the vertices of the respective polygons based on the numbers of the vertices of the respective polygons;
performing vertex coloring rendering on the target scene model based on the first camera view to obtain a vertex coloring rendering image corresponding to the target scene model;
acquiring visible vertexes in vertexes of each polygon based on color values on each pixel point in the vertex coloring rendering image;
and acquiring the target polygon based on visible vertexes of the polygons.
11. The method of claim 7, wherein generating model visibility information corresponding to the set of high probability perspectives comprises:
based on the polygon visibility array, sorting the polygons of each scene model; polygons visible under the high-probability visual angle set are continuous in the sequenced polygons of the scene models;
acquiring the polygon visible information of the unoccluded model part based on the polygon index numbering result; the polygon index numbering result is a result of sequentially numbering the indexes of the polygons of the sequenced scene models; the polygon visibility information contains an index interval of a polygon in the unoccluded model part;
acquiring the vertex visible information of the unoccluded model part based on the polygon vertex index numbering result; the polygon vertex index numbering result is a result of sequentially numbering indexes of vertexes in the polygons of the sequenced scene models; the vertex visibility information includes index intervals for polygon vertices in the unoccluded model portion;
and acquiring the polygon visible information of the unoccluded model part and the vertex visible information of the unoccluded model part as the model visibility information corresponding to the high-probability visual angle set.
12. A picture display apparatus, comprising:
the visual angle acquisition module is used for acquiring a target visual angle, wherein the target visual angle is a camera visual angle for observing a virtual scene; the virtual scene corresponds to a high-probability view angle set, and the high-probability view angle set comprises camera view angles with access probability larger than a probability threshold value in the virtual scene;
the visibility information acquisition module is used for responding to the fact that the target view angle belongs to the high-probability view angle set and acquiring model visibility information corresponding to the high-probability view angle set; the model visibility information is used to indicate portions of models of individual scene models in the virtual scene that are not occluded under the set of high probability perspectives;
a rendering module, configured to submit sub-model information of the model portion indicated by the visibility information to a rendering component, so as to render a scene picture of the virtual scene through the rendering component, the sub-model information comprising rendering data of the model portion indicated by the visibility information;
and the display module is used for displaying the scene picture of the virtual scene.
13. An information generating apparatus, characterized in that the apparatus comprises:
the system comprises a visual angle set acquisition module, an indication information acquisition module and a visibility information generation module, wherein the visual angle set acquisition module is used for acquiring a high-probability visual angle set corresponding to a virtual scene, and the high-probability visual angle set comprises camera visual angles of which the access probability in the virtual scene is greater than a probability threshold;
an indication information obtaining module, configured to determine, based on the high-probability view set, visible model part indication information, where the visible model part indication information is used to indicate a part of each scene model in the virtual scene that is not occluded under the high-probability view set;
the visibility information generation module is used for generating model visibility information corresponding to the high-probability visual angle set; the visibility information is used for indicating that the virtual scene display equipment submits rendering data of the model part indicated by the visibility information to a rendering component when the target visual angle belongs to the high-probability visual angle set; the target perspective is a camera perspective from which the virtual scene is viewed.
14. A computer device comprising a processor and a memory, wherein at least one computer program is stored in the memory, and the at least one computer program is loaded and executed by the processor to implement the screen presentation method according to any one of claims 1 to 4 or the information generation method according to any one of claims 5 to 11.
15. A computer-readable storage medium, wherein at least one computer program is stored in the computer-readable storage medium, and the at least one computer program is loaded and executed by a processor to implement the screen presentation method according to any one of claims 1 to 4 or the information generation method according to any one of claims 5 to 11.
CN202110805394.8A 2021-07-16 2021-07-16 Picture display method, information generation method, device, equipment and storage medium Active CN113457161B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110805394.8A CN113457161B (en) 2021-07-16 2021-07-16 Picture display method, information generation method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110805394.8A CN113457161B (en) 2021-07-16 2021-07-16 Picture display method, information generation method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113457161A true CN113457161A (en) 2021-10-01
CN113457161B CN113457161B (en) 2024-02-13

Family

ID=77880685

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110805394.8A Active CN113457161B (en) 2021-07-16 2021-07-16 Picture display method, information generation method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113457161B (en)



Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010012018A1 (en) * 1998-05-06 2001-08-09 Simon Hayhurst Occlusion culling for complex transparent scenes in computer generated graphics
CN102163343A (en) * 2011-04-11 2011-08-24 西安交通大学 Three-dimensional model optimal viewpoint automatic obtaining method based on internet image
CN102254338A (en) * 2011-06-15 2011-11-23 西安交通大学 Automatic obtaining method of three-dimensional scene optimal view angle based on maximized visual information
CN102982159A (en) * 2012-12-05 2013-03-20 上海创图网络科技发展有限公司 Three-dimensional webpage multi-scenario fast switching method
US20150235410A1 (en) * 2014-02-20 2015-08-20 Samsung Electronics Co., Ltd. Image processing apparatus and method
CN107230248A (en) * 2016-03-24 2017-10-03 国立民用航空学院 Viewpoint selection in virtual 3D environment
CN108154548A (en) * 2017-12-06 2018-06-12 北京像素软件科技股份有限公司 Image rendering method and device
CN108257103A (en) * 2018-01-25 2018-07-06 网易(杭州)网络有限公司 Occlusion culling method, apparatus, processor and the terminal of scene of game
CN111080762A (en) * 2019-12-26 2020-04-28 北京像素软件科技股份有限公司 Virtual model rendering method and device

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114429513A (en) * 2022-01-13 2022-05-03 腾讯科技(深圳)有限公司 Method and device for determining visible element, storage medium and electronic equipment
WO2023134277A1 (en) * 2022-01-13 2023-07-20 腾讯科技(深圳)有限公司 Visible element determination method and apparatus, and storage medium and electronic device
CN114429513B (en) * 2022-01-13 2025-07-11 腾讯科技(深圳)有限公司 Visible element determination method and device, storage medium and electronic device
WO2024067204A1 (en) * 2022-09-30 2024-04-04 腾讯科技(深圳)有限公司 Scene picture rendering method and apparatus, device, storage medium, and program product
CN115729838A (en) * 2022-12-01 2023-03-03 网易(杭州)网络有限公司 Test method, device, electronic equipment and storage medium for scene rendering effect

Also Published As

Publication number Publication date
CN113457161B (en) 2024-02-13

Similar Documents

Publication Publication Date Title
US10957082B2 (en) Method of and apparatus for processing graphics
US5579454A (en) Three dimensional graphics processing with pre-sorting of surface portions
KR101286318B1 (en) Displaying a visual representation of performance metrics for rendered graphics elements
CN113457161B (en) Picture display method, information generation method, device, equipment and storage medium
US6529207B1 (en) Identifying silhouette edges of objects to apply anti-aliasing
CN111986304B (en) Render scenes using a combination of ray tracing and rasterization
Wylie et al. Tetrahedral projection using vertex shaders
US10430996B2 (en) Graphics processing systems
US11341708B2 (en) Graphics processing
KR101267120B1 (en) Mapping graphics instructions to associated graphics data during performance analysis
US10665010B2 (en) Graphics processing systems
JP2004038926A (en) Texture map editing
US20100020069A1 (en) Partitioning-based performance analysis for graphics imaging
CN116670723A (en) System and method for high quality rendering of composite views of customized products
US10839600B2 (en) Graphics processing systems
GB2406252A (en) Generation of texture maps for use in 3D computer graphics
US20230230311A1 (en) Rendering Method and Apparatus, and Device
CN113593028A (en) Three-dimensional digital earth construction method for avionic display control
US10424106B1 (en) Scalable computer image synthesis
CN119169175A (en) Model rendering method, device, equipment and medium
US20050162435A1 (en) Image rendering with multi-level Z-buffers
JP5242788B2 (en) Partition-based performance analysis for graphics imaging
HK40053926A (en) Picture display method, information generation method, device, equipment and storage medium
WO2023184139A1 (en) Methods and systems for rendering three-dimensional scenes
CN117710563A (en) Method for rasterizing-based differentiable renderer of semitransparent objects

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40053926

Country of ref document: HK

TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20220214

Address after: 518102 201a-k49, 2nd floor, 101, 201a, 301, 401, building 1, yujingwan garden, Xin'an Sixth Road, Xin'an street, Bao'an District, Shenzhen City, Guangdong Province

Applicant after: SHENZHEN TENCENT NETWORK INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 518057 Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province, 35 floors

Applicant before: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.

GR01 Patent grant
GR01 Patent grant