
CN119169175A - Model rendering method, device, equipment and medium - Google Patents

Model rendering method, device, equipment and medium

Info

Publication number
CN119169175A
Authority
CN
China
Prior art keywords
information
rendering
original
target
updated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411284946.5A
Other languages
Chinese (zh)
Inventor
赵雅静
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Aoxing Technology Co ltd
Original Assignee
Beijing Aoxing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Aoxing Technology Co ltd filed Critical Beijing Aoxing Technology Co ltd
Priority to CN202411284946.5A priority Critical patent/CN119169175A/en
Publication of CN119169175A publication Critical patent/CN119169175A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Generation (AREA)

Abstract


The embodiments of the present application provide a model rendering method, device, equipment and medium, which relate to the field of three-dimensional model technology. The method includes: obtaining an original model information set of a target three-dimensional model, the original model information set including an original texture information set and an original geometric information set, and the target three-dimensional model is used to render a target scene; from the original geometric information set, selecting updated geometric information for rendering the target scene whose similarity with other original geometric information is lower than a first threshold and whose distance with other original geometric information is greater than a second threshold, and obtaining updated texture information for rendering the target scene by removing duplicate texture blocks from the original texture information set; rendering the updated texture information and the updated geometric information to obtain a target scene corresponding to the target three-dimensional model. In this way, the loading speed of the target three-dimensional model can be improved, the rendering time of the target three-dimensional model can be reduced, and the rendering efficiency of the target three-dimensional model can be improved.

Description

Model rendering method, device, equipment and medium
Technical Field
The present application relates to the field of three-dimensional model technologies, and in particular, to a method, an apparatus, a device, and a medium for rendering a model.
Background
With the dramatic improvement in the hardware performance of terminal equipment, in particular the marked enhancement of core components such as the graphics processing unit (GPU) and the central processing unit (CPU), together with the continuous optimization of algorithms, the reading and rendering efficiency of three-dimensional models keeps improving and rendered scenes become ever richer and more lifelike. Three-dimensional model technology therefore has unprecedented application potential and great value in fields such as game entertainment, virtual reality, and augmented reality.
Currently, rendering of three-dimensional models relies primarily on a series of specialized tools and libraries, with OpenGL being one of the most widely used graphics application programming interfaces (APIs). OpenGL not only supports the creation and rendering of two-dimensional and three-dimensional vector graphics, but can also drive the whole process from basic primitive construction to complex scene presentation, with great flexibility and efficiency.
However, when handling the loading and rendering of a three-dimensional model, OpenGL may need to read, parse, and process a very large amount of data with complex processing, which easily leads to long loading and rendering times and degrades the overall user experience.
Disclosure of Invention
In view of these problems, the present application provides a model rendering method, device, equipment and medium, which can increase the loading speed of a target three-dimensional model, reduce its rendering time, and improve its rendering efficiency.
The embodiment of the application discloses the following technical scheme:
In a first aspect, the present application discloses a model rendering method, the method comprising:
Acquiring an original model information set of a target three-dimensional model, wherein the original model information set comprises an original texture information set and an original geometric information set, the target three-dimensional model is used for rendering a target scene, the original texture information set comprises at least one of color information and texture map information, and the original geometric information set comprises at least one of vertex coordinate information and side information;
selecting updated geometric information for rendering the target scene, wherein the similarity with other original geometric information is lower than a first threshold value, and the distance between the updated geometric information and the other original geometric information is greater than a second threshold value, from the original geometric information set, and acquiring updated texture information for rendering the target scene by removing repeated texture blocks from the original texture information set;
Rendering the updated texture information and the updated geometric information to obtain a target scene corresponding to the target three-dimensional model.
Optionally, the method further comprises:
and if the similarity between the first original texture information and the second original texture information in the original texture information set is higher than a third threshold value, obtaining updated texture information for rendering the target scene by combining the first original texture information and the second original texture information.
Optionally, the rendering the updated texture information and the updated geometric information to obtain a target scene corresponding to the target three-dimensional model includes:
Determining distance information between the target three-dimensional model and a current camera viewpoint;
Selecting a detail level model matched with the distance information from a plurality of detail level models;
And rendering the updated texture information and the updated geometric information corresponding to the detail level model to obtain a target scene corresponding to the target three-dimensional model.
Optionally, the rendering the updated texture information and the updated geometric information to obtain a target scene corresponding to the target three-dimensional model includes:
determining rendering load information of the target scene;
according to the rendering load information, adjusting the resolution of the updated texture information and/or adjusting the level of detail of the updated geometric information;
Rendering the adjusted updated texture information and/or the adjusted updated geometric information to obtain a target scene corresponding to the target three-dimensional model.
Optionally, the acquiring the original model information set of the target three-dimensional model includes:
Determining a to-be-loaded range of the target three-dimensional model;
and acquiring an original model information set of the target three-dimensional model in the range to be loaded.
In a second aspect, the application discloses a model rendering device, which comprises an acquisition module, a selection module and a rendering module;
The acquisition module is used for acquiring an original model information set of a target three-dimensional model, wherein the original model information set comprises an original texture information set and an original geometric information set, the target three-dimensional model is used for rendering a target scene, the original texture information set comprises at least one of color information and texture mapping information, and the original geometric information set comprises at least one of vertex coordinate information and side information;
The selecting module is configured to select, from the original set of geometric information, updated geometric information for rendering the target scene, where a similarity to other original geometric information is lower than a first threshold and a distance from the other original geometric information is greater than a second threshold, and acquire updated texture information for rendering the target scene by removing repeated texture blocks from the original set of texture information;
And the rendering module is used for rendering the updated texture information and the updated geometric information to obtain a target scene corresponding to the target three-dimensional model.
Optionally, the selecting module is further configured to obtain updated texture information for rendering the target scene by merging the first original texture information and the second original texture information if the similarity between the first original texture information and the second original texture information in the original texture information set is higher than a third threshold.
Optionally, the rendering module comprises a first rendering sub-module, a second rendering sub-module and a third rendering sub-module;
The first rendering sub-module is used for determining the distance information between the target three-dimensional model and the current camera viewpoint;
The second rendering sub-module is used for selecting a detail level model matched with the distance information from a plurality of detail level models;
And the third rendering sub-module is used for rendering the updated texture information and the updated geometric information corresponding to the detail level model to obtain a target scene corresponding to the target three-dimensional model.
Optionally, the rendering module comprises a fourth rendering module, a fifth rendering module and a sixth rendering module;
the fourth rendering module is used for determining rendering load information of the target scene;
The fifth rendering module is used for adjusting the resolution of the updated texture information and/or adjusting the level of detail of the updated geometric information according to the rendering load information;
And the sixth rendering module is used for rendering the adjusted updated texture information and/or the adjusted updated geometric information to obtain a target scene corresponding to the target three-dimensional model.
Optionally, the acquisition module is specifically configured to determine a to-be-loaded range of the target three-dimensional model, and acquire an original model information set of the target three-dimensional model within the to-be-loaded range.
In a third aspect, the present application discloses a computer device comprising a processor and a memory:
The memory is used for storing program codes and transmitting the program codes to the processor;
The processor is configured to execute the steps of the model rendering method according to the first aspect according to instructions in the program code.
In a fourth aspect, the present application discloses a computer readable storage medium for storing program code for performing the steps of the model rendering method according to the first aspect.
Compared with the prior art, the application has the following beneficial effects:
The embodiment of the application provides a model rendering method, device, equipment and medium. The method includes: obtaining an original model information set of a target three-dimensional model, wherein the original model information set includes an original texture information set and an original geometric information set, and the target three-dimensional model is used for rendering a target scene; selecting, from the original geometric information set, updated geometric information for rendering the target scene whose similarity to other original geometric information is lower than a first threshold and whose distance from other original geometric information is greater than a second threshold, and obtaining updated texture information for rendering the target scene by removing repeated texture blocks from the original texture information set; and rendering the updated texture information and the updated geometric information to obtain the target scene corresponding to the target three-dimensional model. Therefore, by loading only the necessary parts of the target three-dimensional model (namely the updated texture information and the updated geometric information), the amount of data to be read and processed can be significantly reduced and the data transmission time from the storage device (such as a hard disk or network) to memory shortened, thereby speeding up the loading of the target three-dimensional model. In addition, selecting updated texture information and updated geometric information for rendering removes unnecessary details, which reduces the complexity of the target three-dimensional model and the amount of data the rendering engine must process, further shortening the rendering time and improving the rendering efficiency of the target three-dimensional model.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the description below are only some embodiments of the application, and that other drawings can be obtained according to these drawings without inventive faculty for a person skilled in the art.
FIG. 1 is a flowchart of a model rendering method according to an embodiment of the present application;
FIG. 2 is a flowchart of another model rendering method according to an embodiment of the present application;
fig. 3 is a schematic diagram of a model rendering device according to an embodiment of the present application.
Detailed Description
As described above, OpenGL may need to read, parse, and process a very large amount of data when handling the loading and rendering of a three-dimensional model, with complex processing, which easily leads to long loading and rendering times and degrades the overall user experience.
In view of this, the inventor has studied and proposed a model rendering method, apparatus, and medium. By loading only the necessary parts of the target three-dimensional model (namely the updated texture information and the updated geometric information), the method can significantly reduce the amount of data to be read and processed and shorten the data transmission time from the storage device (such as a hard disk or network) to memory, thereby accelerating the loading of the target three-dimensional model. In addition, selecting updated texture information and updated geometric information for rendering removes unnecessary details, which reduces the complexity of the target three-dimensional model and the amount of data the rendering engine must process, further shortening the rendering time and improving the rendering efficiency of the target three-dimensional model.
In order to make the present application better understood by those skilled in the art, the following description will clearly and completely describe the technical solutions in the embodiments of the present application with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Referring to fig. 1, the flow chart of a model rendering method according to an embodiment of the present application is shown. The method comprises the following steps:
S101, acquiring an original model information set of a target three-dimensional model, wherein the original model information set comprises an original texture information set and an original geometric information set, the target three-dimensional model is used for rendering a target scene, the original texture information set comprises at least one of color information and texture map information, and the original geometric information set comprises at least one of vertex coordinate information and side information.
The original texture information set describes visual characteristics of the target three-dimensional model surface, including but not limited to color information and texture map information. Wherein the color information defines the colors of the respective parts of the target three-dimensional model. The color information can be simple pure color or gradient color or pattern color, and is used for distinguishing different parts of the target three-dimensional model or representing different material effects. Texture map information provides more complex surface details of the target three-dimensional model, such as texture (e.g., wood grain, stone grain, cloth texture, etc., used to simulate the surface texture of an object), reflectivity (used to simulate the reflection effects of the object surface), glossiness (used to define the smoothness of the object surface), etc., which can greatly enhance the realism of the rendered target scene.
The original set of geometric information describes the spatial shape and structure of the three-dimensional model of the object, including but not limited to vertex coordinate information and side information. Wherein the vertex coordinate information defines three-dimensional positions of respective vertices of the target three-dimensional model. Each vertex has a three-dimensional coordinate (X, Y, Z) that together form the three-dimensional shape of the three-dimensional model of the object. By adjusting the position of the vertices, the shape and structure of the target three-dimensional model can be changed. The side information defines how the vertices are connected to form the basic structure of the target three-dimensional model.
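As a concrete illustration of how such an original model information set might be laid out in code, the following C++ sketch groups the texture and geometric information described above; the struct and field names are assumptions for illustration, not structures defined by this application.

```cpp
#include <array>
#include <string>
#include <vector>

// Illustrative layout of the information sets described above.
struct TextureInfo {
    std::array<float, 4> color;   // per-part colour (RGBA): solid, gradient sample, or pattern colour
    std::string textureMapPath;   // texture map carrying grain, reflectivity, glossiness, etc.
};

struct GeometryInfo {
    std::vector<std::array<float, 3>> vertices; // vertex coordinate information (X, Y, Z)
    std::vector<std::array<int, 2>> edges;      // side information: index pairs connecting vertices
};

struct OriginalModelInfoSet {
    std::vector<TextureInfo> textures;   // original texture information set
    std::vector<GeometryInfo> geometry;  // original geometric information set
};
```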
S102, selecting updated geometric information for rendering the target scene, wherein the similarity with other original geometric information is lower than a first threshold value and the distance with other original geometric information is greater than a second threshold value, from the original geometric information set, and acquiring updated texture information for rendering the target scene by removing repeated texture blocks from the original texture information set.
In some specific implementations, to reduce unnecessary data processing in the rendering process, thereby reducing the amount of computation in rendering and improving the rendering efficiency, updated texture information for rendering the target scene may be selected from the original texture information set.
Specifically, the original texture information set is first analyzed to find and identify repeated texture blocks. These repeated texture blocks may be blocks that are identical in grain, reflectivity, glossiness, and so on, or similar blocks that are visually indistinguishable. Once the duplicate texture blocks are identified, their excess copies are removed from the original texture information set, resulting in updated texture information for rendering the target scene. The updated texture information includes all the texture data that is necessary, without repetition, when rendering the target scene. Because redundant original texture information is removed, the updated texture information obtained by this screening is smaller and loads faster, and memory and cache resources can be used more efficiently during rendering, thereby improving rendering efficiency.
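A minimal sketch of this duplicate-removal idea, assuming texture blocks are available as raw byte buffers and treating only byte-identical blocks as duplicates (visually similar blocks would need a perceptual comparison instead); all names are illustrative.

```cpp
#include <cstdint>
#include <string>
#include <unordered_set>
#include <vector>

// One texture block flattened to raw bytes; how blocks are cut from the
// original texture information set is an implementation choice.
using TextureBlock = std::vector<std::uint8_t>;

// Keep the first occurrence of each block and drop exact byte-for-byte copies.
std::vector<TextureBlock> removeDuplicateBlocks(const std::vector<TextureBlock>& blocks) {
    std::unordered_set<std::string> seen;
    std::vector<TextureBlock> updated;
    for (const auto& block : blocks) {
        std::string key(block.begin(), block.end()); // byte content used as the dedup key
        if (seen.insert(key).second) {
            updated.push_back(block);                // first copy kept, later duplicates skipped
        }
    }
    return updated;
}
```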
In some specific implementations, in order to reduce unnecessary data processing in the rendering process, thereby reducing the amount of computation in rendering and improving the rendering efficiency, the unique geometric information (i.e., the geometric information which is not highly similar to other geometric information and is far away from other geometric information) may be selected from the original geometric information set as updated geometric information in rendering the target scene.
Specifically, a first threshold and a second threshold are set first. The first threshold measures the degree of similarity between two pieces of geometric information. When their similarity is below the first threshold, they are considered unique and need to be processed separately during rendering to preserve the details and diversity of the rendered target scene. When their similarity is higher than or equal to the first threshold, they are considered similar, so only one of them may be rendered, or the geometric information may be optimized by other means (e.g., merging or instancing), thereby improving rendering efficiency.
The second threshold measures the relative positional relationship of geometric information in space, i.e., the largest separation at which two pieces of geometric information are still treated as related. When the distance between two pieces of geometric information is greater than the second threshold, they are considered unrelated, and each must still be taken into account when rendering the scene, i.e., they need to be processed separately during rendering to preserve the details and diversity of the rendered target scene. When the distance between two pieces of geometric information is less than or equal to the second threshold, they are considered close enough that only one of them may be rendered, or the geometric information may be optimized by other means (e.g., merging or instancing), thereby improving rendering efficiency.
Thus, if one geometry information has a similarity to all other geometry information below a first threshold and its distance from all other geometry information is greater than a second threshold, then this geometry information is considered as updated geometry information for rendering the target scene.
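The threshold-based selection can be illustrated with the following sketch. It assumes each piece of geometric information has been reduced to a feature vector and a centroid, uses cosine similarity as the similarity measure (the application does not fix a particular measure), and keeps only items that satisfy both threshold conditions against every other item.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// A piece of geometric information reduced to a feature vector plus a centroid;
// both the representation and the metrics below are assumptions for the sketch.
struct GeomItem {
    std::vector<float> feature;   // shape descriptor used for the similarity test
    float cx, cy, cz;             // centroid used for the spatial distance test
};

// Cosine similarity of the feature vectors (assumed measure, range roughly [0, 1]).
float similarity(const GeomItem& a, const GeomItem& b) {
    float dot = 0.f, na = 0.f, nb = 0.f;
    for (std::size_t k = 0; k < a.feature.size() && k < b.feature.size(); ++k) {
        dot += a.feature[k] * b.feature[k];
        na  += a.feature[k] * a.feature[k];
        nb  += b.feature[k] * b.feature[k];
    }
    return (na > 0.f && nb > 0.f) ? dot / (std::sqrt(na) * std::sqrt(nb)) : 0.f;
}

float centroidDistance(const GeomItem& a, const GeomItem& b) {
    float dx = a.cx - b.cx, dy = a.cy - b.cy, dz = a.cz - b.cz;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Keep only items whose similarity to every other item is below the first
// threshold and whose distance to every other item exceeds the second threshold.
std::vector<GeomItem> selectUpdatedGeometry(const std::vector<GeomItem>& items,
                                            float firstThreshold, float secondThreshold) {
    std::vector<GeomItem> updated;
    for (std::size_t i = 0; i < items.size(); ++i) {
        bool unique = true;
        for (std::size_t j = 0; j < items.size() && unique; ++j) {
            if (i == j) continue;
            if (similarity(items[i], items[j]) >= firstThreshold ||
                centroidDistance(items[i], items[j]) <= secondThreshold) {
                unique = false;   // too similar or too close to some other item
            }
        }
        if (unique) updated.push_back(items[i]);
    }
    return updated;
}
```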
Therefore, through the screening of the texture information and the geometric information provided by the embodiment of the application, the calculated amount during rendering can be reduced, and the rendering efficiency is improved. Meanwhile, as the unique geometric information is reserved, the rendered target scene can be ensured to have good visual effect while keeping details and sense of reality.
It should be noted that the present application does not limit the order in which the updated texture information is selected from the original texture information set and the updated geometric information is selected from the original geometric information set.
And S103, rendering the updated texture information and the updated geometric information to obtain a target scene corresponding to the target three-dimensional model.
In summary, the embodiment of the present application provides a model rendering method, which can significantly reduce the amount of data to be read and processed by loading only the necessary parts (i.e. updating texture information and updating geometric information) in the target three-dimensional model, and reduce the data transmission time from the storage device (such as a hard disk or a network) to the memory, thereby accelerating the loading speed of the target three-dimensional model. In addition, unnecessary details can be removed by selecting updated texture information and updated geometric information for rendering, so that the complexity of the target three-dimensional model is reduced, the data volume required to be processed by a rendering engine is reduced, the rendering time of the target three-dimensional model is further shortened, and the rendering efficiency of the target three-dimensional model is improved.
Referring to fig. 2, a flowchart of another model rendering method according to an embodiment of the present application is shown.
It should be noted that, in the model rendering method disclosed by the application, the target three-dimensional model file can be read through the Assimp library to obtain the original model information of the target three-dimensional model. Compared with OpenGL or other large graphics libraries, the Assimp library is smaller and easier to integrate into a project without significantly increasing the size of the final application. Moreover, the Assimp library supports multiple three-dimensional model file formats, enabling developers to easily load and process model data from different sources.
Furthermore, the Assimp library not only reads the model data but also performs certain preprocessing, such as coordinate conversion and skeletal animation data parsing, which greatly simplifies the data processing work in the subsequent rendering process. This makes it particularly suitable for handling complex scenes and large-scale data sets, effectively reducing the CPU burden and improving the overall performance of the software.
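A minimal sketch of loading a model file with the Assimp library as described above; the file name and post-processing flags are assumptions, and the loaded meshes would then feed the selection steps that follow.

```cpp
#include <assimp/Importer.hpp>
#include <assimp/postprocess.h>
#include <assimp/scene.h>
#include <iostream>

int main() {
    Assimp::Importer importer;
    // Triangulate faces, merge identical vertices, and generate normals while importing.
    const aiScene* scene = importer.ReadFile(
        "model.fbx",
        aiProcess_Triangulate | aiProcess_JoinIdenticalVertices | aiProcess_GenNormals);
    if (!scene || !scene->mRootNode) {
        std::cerr << "Assimp error: " << importer.GetErrorString() << '\n';
        return 1;
    }
    // Geometry (vertices, faces) and material/texture references are now
    // available per mesh for the subsequent selection and rendering steps.
    for (unsigned i = 0; i < scene->mNumMeshes; ++i) {
        const aiMesh* mesh = scene->mMeshes[i];
        std::cout << "mesh " << i << ": " << mesh->mNumVertices << " vertices, "
                  << mesh->mNumFaces << " faces\n";
    }
    return 0;
}
```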
The method comprises the following steps:
S201, determining a to-be-loaded range of the target three-dimensional model.
Because the target three-dimensional model may be very large and complex, loading the entire model at once may lead to problems such as insufficient memory and reduced rendering performance. Therefore, a reasonable to-be-loaded range of the target three-dimensional model generally needs to be determined according to factors such as the current viewing angle, user interaction, and scene complexity.
Specifically, determining the to-be-loaded range of the target three-dimensional model according to the current viewing angle means determining, from the position and orientation of the current camera viewpoint, which parts of the target three-dimensional model lie inside or near the camera's view frustum; these parts need to be loaded preferentially.
Determining the to-be-loaded range of the target three-dimensional model according to user interaction means predicting, from the user's operations (such as rotation, scaling, and translation), which parts of the target three-dimensional model may enter the user's field of view; these parts need to be loaded preferentially.
Determining the to-be-loaded range of the target three-dimensional model according to scene complexity means that, for complex scenes, loading priorities may need to be assigned according to the importance and urgency of scene elements to ensure that critical portions of the target three-dimensional model are loaded first.
It should be noted that, the present application is not limited to a specific method for determining the to-be-loaded range of the target three-dimensional model.
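One possible, deliberately simplified way to derive a to-be-loaded range from the current viewpoint is sketched below: a part is scheduled for loading when its bounding sphere lies roughly in front of the camera and within a loading radius. A full view-frustum test and interaction-based prediction would refine this; all names and thresholds are assumptions.

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

struct ModelPart {
    Vec3 center;    // bounding-sphere centre of this part of the target model
    float radius;   // bounding-sphere radius
    int id;         // handle used to request loading of this part
};

static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// camForward is assumed to be a unit vector pointing along the view direction.
std::vector<int> partsToLoad(const std::vector<ModelPart>& parts,
                             const Vec3& camPos, const Vec3& camForward, float loadRadius) {
    std::vector<int> ids;
    for (const auto& p : parts) {
        Vec3 toPart{p.center.x - camPos.x, p.center.y - camPos.y, p.center.z - camPos.z};
        float dist = std::sqrt(dot(toPart, toPart));
        bool inFront = dot(toPart, camForward) > -p.radius;  // sphere at least partly ahead of the camera
        if (inFront && dist - p.radius <= loadRadius) {
            ids.push_back(p.id);                             // load this part preferentially
        }
    }
    return ids;
}
```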
S202, acquiring an original model information set of a target three-dimensional model in a range to be loaded, wherein the original model information set comprises an original texture information set and an original geometric information set, the target three-dimensional model is used for rendering a target scene, the original texture information set comprises at least one of color information and texture map information, and the original geometric information set comprises at least one of vertex coordinate information and side information.
In some specific implementations, in order to improve the rendering efficiency and the response speed, part of the original model information of the target three-dimensional model may be selectively loaded according to actual requirements. For example, if the current scene only needs to show the appearance of the static target three-dimensional model, and does not need to process complex functions such as animation or physical simulation, only the original texture information set and the original geometric information set of the target three-dimensional model can be obtained, and data (such as the original animation information set and the original physical simulation information set) related to the complex functions are ignored, so that memory occupation is reduced, and rendering performance is improved.
In other specific implementations, in order to reduce memory occupation, lower storage requirements, and optimize network transmission bandwidth, the obtained original texture information set may be compressed, thereby avoiding problems of insufficient memory and of increased delay and bandwidth consumption.
It is understood that the step S202 is similar to the step S101, and will not be described here.
And S203, acquiring updated texture information for rendering the target scene by removing the repeated texture blocks from the original texture information set.
It is understood that the step S203 is similar to the step S102, and will not be described here.
S204, selecting updated geometric information for rendering the target scene, wherein the similarity with other original geometric information is lower than a first threshold value and the distance with other original geometric information is greater than a second threshold value, from the original geometric information set.
It is understood that the step S204 is similar to the step S102, and will not be described here.
S205, if the similarity between the first original texture information and the second original texture information in the original texture information set is higher than a third threshold value, obtaining updated texture information for rendering the target scene by combining the first original texture information and the second original texture information.
In order to reduce memory occupation during rendering and improve rendering performance, similar original texture information can be detected and combined. Specifically, the similarity between first original texture information and second original texture information in the original texture information set is checked first. Such similarity is typically calculated by comparing pixel data, color distribution, texture features, and so on of the two pieces of original texture information. If the calculated similarity is higher than the third threshold, the two pieces of original texture information are sufficiently similar visually, so they can be merged, and the merged texture information is taken as the updated texture information for rendering the target scene.
It will be appreciated that in most cases merging similar texture information does not have a significant impact on the final rendering effect, since the textures are already visually very close. Therefore, by combining similar original texture information, the amount of updated texture information stored in memory can be reduced, saving memory resources.
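A small sketch of this merge step, assuming the two textures share the same size and layout, using mean per-pixel difference as the similarity measure (the application also mentions colour distribution and texture features) and a per-pixel average as the merged result; these choices are assumptions for illustration.

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

using Pixels = std::vector<std::uint8_t>;  // raw 8-bit channel data, same layout for both textures

// Returns a value in [0, 1]; 1.0 means the two textures are byte-identical.
float pixelSimilarity(const Pixels& a, const Pixels& b) {
    if (a.size() != b.size() || a.empty()) return 0.f;
    long long diff = 0;
    for (std::size_t i = 0; i < a.size(); ++i) diff += std::abs(int(a[i]) - int(b[i]));
    return 1.f - float(diff) / (255.f * a.size());
}

// If the similarity exceeds the third threshold, produce one merged texture
// (a simple per-pixel average) that both materials can reference.
bool mergeIfSimilar(const Pixels& first, const Pixels& second,
                    float thirdThreshold, Pixels& merged) {
    if (pixelSimilarity(first, second) <= thirdThreshold) return false;
    merged.resize(first.size());
    for (std::size_t i = 0; i < first.size(); ++i)
        merged[i] = std::uint8_t((int(first[i]) + int(second[i])) / 2);
    return true;
}
```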
And S206, determining the distance information between the target three-dimensional model and the current camera viewpoint.
In some specific implementations, a level of detail (Level of Detail, LOD) technique may also be applied to select detail level models of different complexity according to the distance information between the target three-dimensional model and the observer (i.e., the current camera viewpoint), achieving the technical effect of reducing the amount of computation while maintaining visual quality.
S207, selecting a detail level model matched with the distance information from a plurality of detail level models.
A best-matching detail level model is selected from the predefined plurality of detail level models according to the distance information obtained in step S206.
In particular, when the distance information is large (i.e., the target three-dimensional model is far from the current camera viewpoint), a lower detail level model (i.e., simpler, with fewer polygons) is typically selected. This is because the visible detail of distant objects is greatly reduced, so using a high-detail model not only wastes computational resources but may also look unrealistic because of perspective effects. When the distance information is small (i.e., the target three-dimensional model is close to the current camera viewpoint), a higher detail level model (i.e., more complex, with more polygons) is typically selected, ensuring that the target three-dimensional model provides enough detail to maintain visual realism and clarity when viewed at close range.
In this way, when the object to be rendered is far away, a simplified detail level model provides sufficient visual information while reducing the rendering burden; when the object is near, a higher detail level model provides the finer detail needed.
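The distance-based choice in S206 and S207 can be sketched as a simple lookup over predefined detail levels; the threshold-per-level representation used here is an assumption.

```cpp
#include <vector>

// Each level stores the camera distance at which it becomes acceptable.
struct LodLevel {
    float minDistance;   // use this level when the camera is at least this far away
    int   meshId;        // handle to the simplified mesh for this level
};

// `levels` is sorted by ascending minDistance; returns the coarsest level allowed,
// or -1 if no levels are defined.
int selectLod(const std::vector<LodLevel>& levels, float cameraDistance) {
    int chosen = levels.empty() ? -1 : levels.front().meshId;
    for (const auto& level : levels) {
        if (cameraDistance >= level.minDistance) chosen = level.meshId;
    }
    return chosen;
}
```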
And S208, rendering the updated texture information and the updated geometric information corresponding to the detail level model to obtain a target scene corresponding to the target three-dimensional model.
In some specific implementations, the rendering load information of the target scene can also be dynamically evaluated by analyzing the GPU usage rate during rendering. The rendering load information is a measure of the computing resources and time required to render the target scene, and depends on the complexity of the target three-dimensional model in the scene (such as the number of vertices, texture resolution, and number of materials), the complexity of lighting and shadow calculations, the resolution of the view (i.e., the number of pixels of the rendering output), and the frame-rate requirement during real-time rendering. If the rendering load value is high, the resolution of the textures may be reduced to lower GPU memory usage and texture sampling time. Conversely, if the rendering load is low and performance allows, texture resolution may be increased to improve image quality. Alternatively, if the rendering load value is high, a detail level model with a lower level of detail (i.e., simpler, with fewer polygons) may be selected to save computational resources; conversely, if the rendering load is low and performance allows, a detail level model with a higher level of detail (i.e., more complex, with more polygons) may be selected, ensuring that the target three-dimensional model provides enough detail to maintain visual realism and clarity. The target scene is then rendered using the adjusted updated texture information and/or the adjusted updated geometric information. Through such adjustment, the rendering load can be reduced as much as possible while keeping the visual effect acceptable, improving rendering efficiency and performance.
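A hedged sketch of this load-driven adjustment: given a GPU utilisation figure (how it is sampled is platform specific and left outside the sketch), scale texture resolution and bias the detail level up or down. The utilisation bounds and scaling factors are assumptions.

```cpp
#include <algorithm>

struct RenderQuality {
    float textureScale = 1.0f;  // multiplier on texture resolution
    int   lodBias      = 0;     // positive values push toward coarser detail levels
};

// gpuUtilisation is expected in [0, 1]; adjust quality toward the load target.
RenderQuality adjustForLoad(float gpuUtilisation, RenderQuality current) {
    if (gpuUtilisation > 0.90f) {            // overloaded: trade quality for speed
        current.textureScale = std::max(0.25f, current.textureScale * 0.5f);
        current.lodBias += 1;
    } else if (gpuUtilisation < 0.60f) {     // headroom: restore quality
        current.textureScale = std::min(1.0f, current.textureScale * 2.0f);
        current.lodBias = std::max(0, current.lodBias - 1);
    }
    return current;
}
```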
It will be appreciated that the rendering process also needs to respond to various user operations, such as mouse dragging and scroll-wheel scrolling, to generate the target scene the user expects. For example, a user may wish to change the viewing angle by dragging the mouse, or to zoom the target scene corresponding to the target three-dimensional model in or out by scrolling the scroll wheel, so the camera viewpoint and/or rendering parameters need to be updated in real time according to these user operations to ensure that the rendered scene is consistent with the user's expectations.
In summary, the embodiment of the present application provides a model rendering method, which can significantly reduce the amount of data to be read and processed by loading only the necessary parts (i.e. updating texture information and updating geometric information) in the target three-dimensional model, and reduce the data transmission time from the storage device (such as a hard disk or a network) to the memory, thereby accelerating the loading speed of the target three-dimensional model. In addition, unnecessary details can be removed by selecting updated texture information and updated geometric information for rendering, so that the complexity of the target three-dimensional model is reduced, the data volume required to be processed by a rendering engine is reduced, the rendering time of the target three-dimensional model is further shortened, and the rendering efficiency of the target three-dimensional model is improved.
Referring to fig. 3, the present application provides a model rendering device. The model rendering device 300 comprises an acquisition module 301, a selection module 302 and a rendering module 303.
An obtaining module 301, configured to obtain an original model information set of a target three-dimensional model, where the original model information set includes an original texture information set and an original geometry information set, and the target three-dimensional model is used for rendering a target scene, the original texture information set includes at least one of color information and texture map information, and the original geometry information set includes at least one of vertex coordinate information and side information;
A selection module 302, configured to select, from the original set of geometric information, updated geometric information for rendering the target scene that has a similarity with other original geometric information lower than a first threshold and a distance from other original geometric information greater than a second threshold, and obtain updated texture information for rendering the target scene by removing the repeated texture blocks from the original set of texture information;
and the rendering module 303 is configured to render the updated texture information and the updated geometric information to obtain a target scene corresponding to the target three-dimensional model.
In some specific implementations, the selecting module 302 is further configured to obtain updated texture information for rendering the target scene by merging the first original texture information and the second original texture information if the similarity between the first original texture information and the second original texture information in the set of original texture information is higher than a third threshold.
In some specific implementations, the rendering module 303 includes a first rendering sub-module, a second rendering sub-module, and a third rendering sub-module;
the first rendering sub-module is used for determining the distance information between the target three-dimensional model and the current camera viewpoint;
the second rendering sub-module is used for selecting a detail level model matched with the distance information from the detail level models;
And the third rendering sub-module is used for rendering the updated texture information and the updated geometric information corresponding to the detail level model to obtain a target scene corresponding to the target three-dimensional model.
In some specific implementations, the rendering module 303 includes a fourth rendering module, a fifth rendering module, and a sixth rendering module;
The fourth rendering module is used for determining rendering load information of the target scene;
a fifth rendering module, configured to adjust the resolution of the updated texture information and/or adjust the level of detail of the updated geometric information according to the rendering load information;
And the sixth rendering module is used for rendering the adjusted updated texture information and/or the adjusted updated geometric information to obtain a target scene corresponding to the target three-dimensional model.
In some specific implementations, the obtaining module 301 is specifically configured to determine a to-be-loaded range of the target three-dimensional model, and obtain an original model information set of the target three-dimensional model within the to-be-loaded range.
In summary, the embodiment of the present application provides a model rendering apparatus, which can significantly reduce the amount of data to be read and processed by loading only the necessary parts (i.e. updating texture information and updating geometry information) in the target three-dimensional model, and reduce the data transmission time from the storage device (such as a hard disk or a network) to the memory, thereby accelerating the loading speed of the target three-dimensional model. In addition, unnecessary details can be removed by selecting updated texture information and updated geometric information for rendering, so that the complexity of the target three-dimensional model is reduced, the data volume required to be processed by a rendering engine is reduced, the rendering time of the target three-dimensional model is further shortened, and the rendering efficiency of the target three-dimensional model is improved.
The embodiment of the application also provides corresponding computer equipment and a computer storage medium, which are used for realizing the model rendering method provided by the embodiment of the application.
The computer device comprises a memory for storing instructions or code and a processor for executing the instructions or code to cause the device to perform a model rendering method according to any of the embodiments of the present application.
The computer storage medium has code stored therein, and when the code is executed, the apparatus executing the code implements the model rendering method according to any of the embodiments of the present application.
The "first" and "second" in the names of "first", "second" (where present) and the like in the embodiments of the present application are used for name identification only, and do not represent the first and second in sequence.
From the above description of embodiments, it will be apparent to those skilled in the art that all or part of the steps of the above described example methods may be implemented in software plus general hardware platforms. Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a storage medium, such as a read-only memory (ROM)/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network communication device such as a router) to perform the method according to the embodiments or some parts of the embodiments of the present application.
It should be noted that, in the present specification, the embodiments are described in a progressive manner; identical and similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the apparatus and system embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments. The apparatus and system embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. Those of ordinary skill in the art can understand and implement this without undue burden.
The foregoing is only one specific embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the technical scope of the present application should be included in the scope of the present application. Therefore, the protection scope of the present application should be subject to the protection scope of the claims.

Claims (10)

1. A method of model rendering, the method comprising:
Acquiring an original model information set of a target three-dimensional model, wherein the original model information set comprises an original texture information set and an original geometric information set, the target three-dimensional model is used for rendering a target scene, the original texture information set comprises at least one of color information and texture map information, and the original geometric information set comprises at least one of vertex coordinate information and side information;
selecting updated geometric information for rendering the target scene, wherein the similarity with other original geometric information is lower than a first threshold value, and the distance between the updated geometric information and the other original geometric information is greater than a second threshold value, from the original geometric information set, and acquiring updated texture information for rendering the target scene by removing repeated texture blocks from the original texture information set;
Rendering the updated texture information and the updated geometric information to obtain a target scene corresponding to the target three-dimensional model.
2. The method according to claim 1, wherein the method further comprises:
and if the similarity between the first original texture information and the second original texture information in the original texture information set is higher than a third threshold value, obtaining updated texture information for rendering the target scene by combining the first original texture information and the second original texture information.
3. The method of claim 1, wherein rendering the updated texture information and the updated geometry information to obtain a target scene corresponding to the target three-dimensional model comprises:
Determining distance information between the target three-dimensional model and a current camera viewpoint;
Selecting a detail level model matched with the distance information from a plurality of detail level models;
And rendering the updated texture information and the updated geometric information corresponding to the detail level model to obtain a target scene corresponding to the target three-dimensional model.
4. The method of claim 1, wherein rendering the updated texture information and the updated geometry information to obtain a target scene corresponding to the target three-dimensional model comprises:
determining rendering load information of the target scene;
according to the rendering load information, adjusting the resolution of the updated texture information and/or adjusting the level of detail of the updated geometric information;
Rendering the adjusted updated texture information and/or the adjusted updated geometric information to obtain a target scene corresponding to the target three-dimensional model.
5. The method of claim 1, wherein the obtaining the original model information set of the target three-dimensional model comprises:
Determining a to-be-loaded range of the target three-dimensional model;
and acquiring an original model information set of the target three-dimensional model in the range to be loaded.
6. The model rendering device is characterized by comprising an acquisition module, a selection module and a rendering module;
The acquisition module is used for acquiring an original model information set of a target three-dimensional model, wherein the original model information set comprises an original texture information set and an original geometric information set, the target three-dimensional model is used for rendering a target scene, the original texture information set comprises at least one of color information and texture mapping information, and the original geometric information set comprises at least one of vertex coordinate information and side information;
The selecting module is configured to select, from the original set of geometric information, updated geometric information for rendering the target scene, where a similarity to other original geometric information is lower than a first threshold and a distance from the other original geometric information is greater than a second threshold, and acquire updated texture information for rendering the target scene by removing repeated texture blocks from the original set of texture information;
And the rendering module is used for rendering the updated texture information and the updated geometric information to obtain a target scene corresponding to the target three-dimensional model.
7. The apparatus of claim 6, wherein the selection module is further configured to:
if the similarity between first original texture information and second original texture information in the original texture information set is higher than a third threshold value, obtain updated texture information for rendering the target scene by merging the first original texture information and the second original texture information.
8. The apparatus of claim 6, wherein the rendering module comprises a first rendering sub-module, a second rendering sub-module, and a third rendering sub-module;
The first rendering sub-module is used for determining the distance information between the target three-dimensional model and the current camera viewpoint;
The second rendering sub-module is used for selecting a detail level model matched with the distance information from a plurality of detail level models;
And the third rendering sub-module is used for rendering the updated texture information and the updated geometric information corresponding to the detail level model to obtain a target scene corresponding to the target three-dimensional model.
9. A computer device, the device comprising a processor and a memory:
The memory is used for storing program codes and transmitting the program codes to the processor;
The processor is configured to perform the steps of the model rendering method of any one of claims 1 to 5 according to instructions in the program code.
10. A computer readable storage medium for storing program code for performing the steps of the model rendering method of any one of claims 1 to 5.
CN202411284946.5A 2024-09-13 2024-09-13 Model rendering method, device, equipment and medium Pending CN119169175A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411284946.5A CN119169175A (en) 2024-09-13 2024-09-13 Model rendering method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411284946.5A CN119169175A (en) 2024-09-13 2024-09-13 Model rendering method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN119169175A true CN119169175A (en) 2024-12-20

Family

ID=93885111

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411284946.5A Pending CN119169175A (en) 2024-09-13 2024-09-13 Model rendering method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN119169175A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119741462A (en) * 2025-03-06 2025-04-01 中国计量大学 A selective 3D rendering method based on three-dimensional scenes
CN119741462B (en) * 2025-03-06 2025-06-17 中国计量大学 A selective 3D rendering method based on three-dimensional scenes

Similar Documents

Publication Publication Date Title
US10600167B2 (en) Performing spatiotemporal filtering
US8223149B2 (en) Cone-culled soft shadows
JP3184327B2 (en) Three-dimensional graphics processing method and apparatus
US6650327B1 (en) Display system having floating point rasterization and floating point framebuffering
US8111264B2 (en) Method of and system for non-uniform image enhancement
US8134556B2 (en) Method and apparatus for real-time 3D viewer with ray trace on demand
US20060256112A1 (en) Statistical rendering acceleration
US9684997B2 (en) Efficient rendering of volumetric elements
US20100045670A1 (en) Systems and Methods for Rendering Three-Dimensional Objects
US7876332B1 (en) Shader that conditionally updates a framebuffer in a computer graphics system
US7064755B2 (en) System and method for implementing shadows using pre-computed textures
US11989807B2 (en) Rendering scalable raster content
US6791544B1 (en) Shadow rendering system and method
JP7735518B2 (en) Method and system for generating polygon meshes that approximate surfaces using root finding and iteration on mesh vertex positions - Patents.com
EP3211601B1 (en) Rendering the global illumination of a 3d scene
US8547395B1 (en) Writing coverage information to a framebuffer in a computer graphics system
KR20080018404A (en) Computer-readable recording medium that stores background creation programs for game production
CN119169175A (en) Model rendering method, device, equipment and medium
US7817165B1 (en) Selecting real sample locations for ownership of virtual sample locations in a computer graphics system
US11776179B2 (en) Rendering scalable multicolored vector content
JP5864474B2 (en) Image processing apparatus and image processing method for processing graphics by dividing space
CN119107399B (en) Shadow rendering method and device based on 2D image
US20240320903A1 (en) Methods and systems for generating enhanced light texture data
WO2023184139A1 (en) Methods and systems for rendering three-dimensional scenes
Schütz et al. Splatshop: Efficiently Editing Large Gaussian Splat Models

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination