WO2024102165A1 - Methods and systems for rendering video graphics - Google Patents
Methods and systems for rendering video graphics
- Publication number
- WO2024102165A1 (PCT/US2022/082372)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- ray
- data
- scene
- vertex
- intersection position
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/06—Ray-tracing
- G06T15/005—General purpose rendering architectures
Definitions
- the present invention is directed to graphics rendering systems and methods. According to a specific embodiment, the present invention provides a method that utilizes noise-free rendering with optimized secondary ray tracing. There are other embodiments as well.
- Embodiments of the present invention can be implemented in conjunction with existing systems and processes.
- a rendering system configuration and its related methods according to the present invention can be used in a wide variety of systems, including virtual reality (VR) systems, mobile devices, and the like.
- various techniques according to the present invention can be adopted into existing systems via integrated circuit fabrication, operating software, and application programming interfaces (APIs). There are other benefits as well.
- a system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions.
- One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
- One general aspect includes a method for rendering refractive transparent objects in real-time video graphics using ray tracing.
- the method also includes receiving a plurality of graphics data associated with a three-dimensional (3D) scene, including all data necessary to determine the first object intersected by a ray cast through each pixel in the viewport, the plurality of graphics data including a plurality of vertex data associated with a plurality of vertices in the 3D scene.
- the method also includes generating a plurality of primitive data using at least the plurality of vertex data and a vertex shader, the plurality of primitive data may include a position data and a material data.
- the method also includes determining a roughness value of the first object using at least the plurality of vertex data and a rasterization of material properties and texture coordinates associated with at least the first object.
- the method also includes using rasterization to calculate both the direction of an implicit primary ray through a pixel and a first intersection position of that ray with the first object.
- the method also includes casting a secondary ray from the first intersection position within a fragment shader.
- the method also includes providing a visual continuity across reflected and refracted images using an environment map associated with the 3D scene.
- the method also includes accessing an environment map associated with the 3D scene.
- the method also includes calculating a Fresnel value associated with the first intersection position and the implicit primary ray in the fragment shader.
- the method also includes determining whether to skip, reflect, or refract the secondary ray from the first intersection position using the fragment shader based at least on the Fresnel value and the roughness value.
- the method also includes shading the first pixel covering the first intersection position based at least on the secondary ray.
- the method also includes storing the first pixel in a frame buffer.
- Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
- Implementations may include one or more of the following features.
- the secondary ray may be cast using hardware-accelerated in-line ray tracing.
- the method may include interpolating render data through the rasterization of objects visible to a camera, the objects including the first object.
- the method may include determining whether or not to cast the secondary ray by comparing the roughness value against a predetermined threshold roughness value.
- the method may include storing a maximum roughness value across a hits path associated with the implicit primary ray.
- the method may include determining whether to skip, reflect, or refract the secondary ray by comparing the Fresnel value against a predetermined threshold Fresnel value.
- the secondary ray may be cast in a refracted or reflected direction from the first intersection position.
- the system may include permanent storage, which holds an application that includes executable instructions.
- the system also includes volatile memory, which holds data used by the application when it is run.
- the system also includes a processor coupled to the storage and the memory, the processor being configured to: execute application instructions; generate a plurality of graphics data associated with a three-dimensional (3D) scene including at least a first object, the plurality of graphics data including a plurality of vertex data associated with a plurality of vertices in the 3D scene; generate a plurality of primitive data using at least the plurality of vertex data and a vertex shader, the plurality of primitive data may include a position data and a material data; determine a roughness value of the first object visible at each pixel using at least the plurality of vertex data and a rasterization of material properties and texture coordinates associated with at least the first object; cast a secondary ray within a fragment shader from a first intersection position of the first object and an implicit primary ray; calculate a Fresnel value associated with the first intersection position and the implicit primary ray in the fragment shader; determine whether to skip, reflect, or refract the secondary ray from the first intersection position based at least on the Fresnel value and the roughness value; shade a first pixel covering the first intersection position based at least on the secondary ray; and store the first pixel in a frame buffer.
- Implementations may include one or more of the following features.
- the system where the processor may include a central processing unit (CPU) and a graphics processing unit (GPU).
- the memory is shared by the CPU and the GPU.
- the memory may include a frame buffer for storing the first pixel.
- One general aspect includes a method for generating a video with ray tracing.
- the method includes generating a three-dimensional (3D) scene made up of objects.
- the method also includes receiving a plurality of graphics data associated with each object in the 3D scene, the plurality of graphics data including a plurality of vertex data associated with a plurality of vertices in each object.
- the method also includes generating a plurality of primitive data using at least the plurality of vertex data and a vertex shader, the plurality of primitive data may include a position data and a material data.
- the method also includes determining a roughness value of the first object using at least the plurality of vertex data and a rasterization of material properties and texture coordinates associated with at least the first object in order to approximate a primary ray cast.
- the method also includes casting a secondary ray from the first point within a fragment shader.
- the method also includes calculating a Fresnel value associated with the first point and the primary ray in the fragment shader.
- the method also includes determining whether to reflect or refract the secondary ray from the first point using the fragment shader based at least on the Fresnel value and the roughness value.
- the method also includes shading the first pixel covering the first point based at least on the secondary ray.
- Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
- Implementations may include one or more of the following features.
- the method may include accessing an environment map associated with the 3D scene.
- the method may include providing a visual continuity across reflected and refracted images using an environment map associated with the 3D scene.
- the 3D scene is generated using a central processing unit.
- the vertex shader and the fragment shader are processed using a graphics processing unit.
- the central processing unit and the graphics processing unit share a memory.
- the present invention provides configurations and methods for graphics rendering systems that minimize memory bandwidth load using a noise-free unified path tracer. Additionally, the present invention implements secondary ray optimizations and Blended Fresnel Splitting to further reduce memory costs while achieving real-time rendering for mobile applications.
- Figure 1 is a simplified diagram illustrating a system configured for rendering video graphics according to embodiments of the present invention.
- Figure 2 is a simplified flow diagram illustrating a conventional forward pipeline for rendering video graphics.
- Figure 3 is a simplified flow diagram illustrating a conventional hybrid pipeline for rendering video graphics.
- Figure 4 is a simplified flow diagram illustrating a rendering pipeline according to embodiments of the present invention.
- Figure 5 is a simplified flow diagram illustrating a method for rendering video graphics according to embodiments of the present invention.
- Figure 6 is a simplified flow diagram illustrating a method for generating a video according to embodiments of the present invention.
- the present invention is directed to graphics rendering systems and methods. According to a specific embodiment, the present invention provides a method and system that utilizes noise-free rendering that optimizes secondary ray tracing.
- the present invention can be configured for real-time applications (RTAs), such as video conferencing, video gaming, and virtual reality (VR). There are other embodiments as well.
- the present invention provides configurations and methods for graphics rendering systems that minimize memory bandwidth load using a noise-free unified path tracer. Additionally, the present invention implements secondary ray optimizations and Blended Fresnel Splitting to further reduce memory costs while achieving real-time rendering for mobile applications.
- any element in a claim that does not explicitly state “means for” performing a specified function, or “step for” performing a specific function, is not to be interpreted as a “means” or “step” clause as specified in 35 U.S.C. Section 112, Paragraph 6.
- the use of “step of” or “act of” in the Claims herein is not intended to invoke the provisions of 35 U.S.C. 112, Paragraph 6.
- FIG. 1 is a simplified block diagram illustrating a mobile system 100 for rendering video graphics according to embodiments of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
- the mobile system 100 can be configured within housing 110 and can include camera device 120 (or other image or video capturing device), processor device 130, memory device 140 (e.g., volatile memory storage), and storage device 150 (e.g., permanent memory storage).
- Camera 120 can be mounted on housing 110 and be configured to capture an input image.
- the input image can be stored in memory 140, which may include a random-access memory (RAM) device, an image/video buffer device, a frame buffer, or the like.
- Various software, executable instructions, and files can be stored in storage device 150, which may include read-only memory (ROM), a hard drive, or the like.
- Processor 130 may be coupled to each of the previously mentioned components and be configured to communicate between these components.
- processor 130 includes a central processing unit (CPU), a network processing unit (NPU), or the like.
- System 100 may also include graphics processing unit (GPU) 132 coupled to at least processor 130 and memory 140.
- memory 140 is configured to be shared between processor 130 (e.g., CPU) and GPU 132, and is configured to hold data used by an application when it is run. As memory 140 is shared, it is important to use memory 140 efficiently. For example, a high memory usage by GPU 132 may negatively impact system performance.
- shared memory or RAM can be used in a variety of ways, depending on the specific needs of the CPU and GPU. The amount of shared RAM that is available in a device can have a significant impact on its performance, as it determines how much data can be stored and accessed quickly by the CPU and GPU.
- memory 140 is configured as a tile-based memory.
- System 100 may also include user interface 160 and network interface 170.
- User interface 160 may include display region 162 that is configured to display text, images, videos, rendered graphics, interactive elements, etc.
- Display 162 may be coupled to the GPU 132 and may also be configured to display at a refresh rate of at least 24 frames per second.
- Display region 162 may comprise a touchscreen display (e.g., in a mobile device, tablet, etc.)
- user interface 160 may also include touch interface 164 for receiving user input (e.g., keyboard or keypad in a mobile device, laptop, or other computing devices).
- User interface 160 may be used in real-time applications (RTAs), such as multimedia streaming, video conferencing, navigation, video games, and the like.
- Network interface 170 may be configured to transmit and receive instructions and files (e.g., using Wi-Fi, Bluetooth, Ethernet, etc.) for graphic rendering.
- network interface 170 may be configured to compress or down-sample images for transmission or further processing.
- Network interface 170 may be configured to send one or more images to a server for OCR.
- Processor 130 may be coupled to and configured to communicate between user interface 160, network interface 170, and/or other interfaces.
- processor 130 and GPU 132 may be configured to perform steps for rendering video graphics, which can include those related to the executable instructions stored in storage 150.
- Processor 130 may be configured to execute application instructions and generate a plurality of graphics data associated with a 3D scene including at least a first object.
- the plurality of graphics data can include a plurality of vertex data associated with a plurality of vertices in the 3D scene (e.g., for each object).
- GPU 132 may be configured to generate a plurality of primitive data using at least the plurality of vertex data and a vertex shader.
- the plurality of primitive data may include a position data and a material data.
- GPU 132 may be configured to determine a roughness value of the first object using at least the plurality of vertex data and a rasterization of material properties and texture coordinates associated with at least the first object.
- GPU 132 may be configured to cast a primary ray in a first direction through the first pixel onto a first point of the first object. Alternatively, GPU 132 may be configured to skip the primary ray cast by computing the primary hit using data interpolated by rasterization.
- the GPU 132 can also be configured to cast a secondary ray from the first point or the primary hit point within a fragment shader. Then, GPU 132 can be configured to calculate a Fresnel value associated with the first point and the primary ray in the fragment shader. Using at least the Fresnel value and the roughness value, the GPU 132 may be configured to determine whether to reflect or refract the secondary ray from the first point or the primary hit point using the fragment shader.
- the GPU 132 may be configured to shade the first pixel covering the first point or the primary hit point based at least on the secondary ray, and the first pixel can then be stored in memory 140.
- Other embodiments of this system include corresponding computer systems, apparatuses, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods. Further details of methods are discussed with reference to the following figures.
- FIG. 2 is a simplified flow diagram illustrating conventional forward pipeline 200 for rendering video graphics.
- forward pipeline 200 includes vertex shader 210 followed by fragment shader 220.
- a CPU provides graphics data of a 3D scene (e.g., from memory, storage, over network, etc.) to a graphic card or a GPU.
- vertex shader 210 transforms objects in the 3D scene from object space to screen space. This process includes projecting the geometries of the objects and breaking them down into vertices, which are then transformed and split into fragments, or pixels.
- in fragment shader 220, these pixels are shaded (e.g., colors, lighting, textures, etc.) before they are passed to a display (e.g., the screen of a smartphone, tablet, VR goggles, etc.).
- rendering effects are processed for every vertex and on every fragment in the visible scene for every light source.
- FIG. 3 is a simplified flow diagram illustrating a conventional hybrid (deferred) pipeline 300 for rendering video graphics.
- the prepass process 310 involves receiving the graphics data of the 3D scene from the CPU and generating a geometry buffer (G-buffer) with data needed for subsequent rendering passes, such as color, depth, normal, etc.
- a ray traced reflections pass 320 involves processing the G-buffer data to determine the reflections of the scene.
- the ray traced shadows pass 330 involves processing the G-buffer data to determine the shadows of the scene.
- a denoising pass 340 removes the noise for pixels that were ray-traced and blends across pixels that were not ray-traced.
- in the main shading pass 350, the reflections, shadows, and material evaluations are combined to produce the shaded output with each pixel color.
- in the post pass 360, the shaded output is subject to additional rendering processes such as color grading, depth of field, etc.
- the deferred pipeline 300 reduces the total fragment count by only processing rendering effects based on unoccluded pixels. This is accomplished by breaking up the rendering process into multiple stages (i.e., passes) in which the color, depth, and normal of the objects in the 3D scene are written to separate buffers that are subsequently rendered together to produce the final rendered frame. Subsequent passes use depth values to skip rendering of occluded pixels when executing more complex lighting shaders.
- the deferred render pipeline approach reduces the complexity of any single shader compared to the forward render pipeline approach, but having multiple rendering passes requires greater memory bandwidth, which is especially problematic for many architectures today with limited and shared memory.
- Embodiments of the present invention provide methods and systems for graphics rendering implementing one or more optimization techniques to achieve real-time performance while minimizing memory bandwidth load.
- the present invention provides for a method and system using noise-free unified path tracing that combines ray-traced effects in-line with real-time approximations of physically-based rendering.
- These techniques can be performed by a GPU (such as GPU 132 of system 100 in Figure 1) configured within a graphic rendering system.
- the rendering system can be configured as a mobile device, such as a smartphone, a tablet, VR goggles, or the like.
- the techniques described herein may be implemented separately or in combination with each other and/or other conventional techniques depending upon the application.
- the present invention implements secondary ray optimization techniques. While primary rays render a target scene as directly perceived by the camera, secondary rays can be configured to render indirect light effects (e.g., reflections, refractions, shadows, etc.). Depending on the application, such indirect light effects may be rendered in a manner that mitigates the costs of ray casts, such as secondary ray casts, tertiary ray casts, and beyond.
- ray cost can be mitigated by using a designated cutoff threshold that restricts the number of ray casts based on predetermined criteria for material properties, such as a roughness cutoff based on a roughness material property.
- This technique can be implemented to skip ray casts, but it does not support configuring a ray cast to render more cheaply based on a given material value.
- the present invention can include tracking a minimum and/or maximum material property value along an entire path of a ray cast and using such values to determine a desired rendering quality. For example, a maximum roughness along the entire path may be used to determine a minimum necessary rendering quality. Using such path cutoff thresholds can enable the reduction of quality to the minimum necessary for the entire path.
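- As a non-authoritative sketch of this path-cutoff idea (the struct, field, and threshold names below are assumptions, not taken from the patent), the ray payload can carry the maximum roughness seen so far and gate later casts on it:

```glsl
// Illustrative GLSL sketch; names and the cutoff value are assumptions.
struct PathState {
    vec3  throughput;     // accumulated color contribution along the path
    float maxRoughness;   // maximum roughness encountered along the path so far
};

const float ROUGHNESS_CUTOFF = 0.6;  // hypothetical application-tuned threshold

bool shouldCastSecondaryRay(inout PathState state, float hitRoughness) {
    // Track the path-wide maximum so every later bounce can be rendered at
    // the minimum quality the entire path requires.
    state.maxRoughness = max(state.maxRoughness, hitRoughness);
    // Once the path is rough enough, skip the cast and fall back to a cheap
    // approximation (e.g., an environment-map sample).
    return state.maxRoughness < ROUGHNESS_CUTOFF;
}
```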
- cutoff thresholds can be used for different material properties or certain properties may share the same cutoff threshold depending on the application.
- Other material properties can include metalness, opacity, gloss, index of refraction (IOR), and the like.
- the stored material property value can be compared against a designated cutoff threshold (e.g., maximum roughness along a path) to skip certain rendering features, as discussed previously. In such cases, the stored maximum or minimum material property values can still be used as needed for environment map mip sampling.
- the present invention can also use a stochastic path tracing pipeline which implements subsurface scattering in opaque objects and volumetric backscattering in transmissive objects, which can also be disabled based on designated cutoff thresholds (e.g., maximum roughness along the path indicates that processing secondary ray casts from a rough object to a smooth object is not needed for the desired quality of the final image).
- the color computation of the secondary ray cast can be simplified in the case of global illumination of a rough surface by using a simple opacity-attenuated Lambertian diffuse calculation, or the like.
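- A sketch of that simplified color computation (all parameter names are illustrative assumptions): an opacity-attenuated Lambertian term stands in for the full material model on rough secondary hits:

```glsl
// Low-cost stand-in for full material evaluation on rough secondary hits;
// all parameter names are assumptions.
vec3 cheapSecondaryColor(vec3 albedo, float opacity,
                         vec3 n, vec3 lightDir, vec3 lightColor) {
    float ndotl = max(dot(n, lightDir), 0.0);
    return albedo * lightColor * ndotl * opacity;  // opacity-attenuated Lambertian
}
```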
- the new ray process can make context-aware decisions to skip advanced effects, such as subsurface scattering, reflections, refractive transmission, etc. Since the roughness value is already obtained for sampling the correct level of detail of a reflection map in the case of a ray miss, there is no additional cost in ray payload footprint to overload that roughness information to adaptively skip certain rendering effects.
- Additional ray optimizations to reduce bandwidth costs can be achieved by the use of a unified rendering architecture (e.g., using a monolithic shader).
- the monolithic shader code implements recursive rendering functions in an in-line configuration rather than in discrete ray-tracing shaders.
- the rendering process for certain effects can be handled by different rendering processes (e.g., rasterization, ray tracing, path tracing, etc.). Further details are discussed below.
- the present invention implements noise-free rendering and unified path tracing in a single shader. These features can enable more flexible and context-aware tuning of individual ray casts.
- the unified path tracing technique involves computing all rendering effects (e.g., lighting effects) in the same holistic shader. For a given 3D scene, a path tracing process can be performed in a single recursive function call that is repeated per-pixel to produce a rendered frame. Also, in-line ray tracing can be configured with such single recursive function calls to cast rays from inside a vertex, fragment, or compute shader.
- a basic rasterization pipeline can be used to compute primary rendering data for each pixel of a rendered frame (e.g., primary hit data per-pixel).
- the vertex shader can extract rendering properties from the 3D scene for rasterization, such as world-space position, world-space normal, world-space tangent, UV mapping, material ID, etc.
- the fragment shader can compute direct illumination of the primary hits using the material ID and dispatching a process to compute shadowing (e.g., rayQuery call).
- the fragment shader can also dispatch recursive compute processes (e.g., rayQuery calls) for indirect illumination of predetermined materials (e.g., glass, mirror, diffuse materials, etc.).
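- A minimal sketch of such an in-line ray cast, assuming Vulkan GLSL with the GL_EXT_ray_query extension (the binding and function names are illustrative, not from the patent):

```glsl
#extension GL_EXT_ray_query : require

layout(set = 0, binding = 0) uniform accelerationStructureEXT topLevelAS; // assumed binding

// Casts a shadow ray from inside the fragment shader itself, with no
// separate ray-tracing pipeline or render pass.
bool traceShadowInline(vec3 origin, vec3 dir, float tMax) {
    rayQueryEXT rq;
    rayQueryInitializeEXT(rq, topLevelAS,
                          gl_RayFlagsTerminateOnFirstHitEXT | gl_RayFlagsOpaqueEXT,
                          0xFF, origin, 1e-3, dir, tMax);
    while (rayQueryProceedEXT(rq)) { }  // opaque-only: no candidate handling needed
    return rayQueryGetIntersectionTypeEXT(rq, true) !=
           gl_RayQueryCommittedIntersectionNoneEXT;
}
```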
- the single-shader architecture provides opportunities for data reuse across different lighting effects without incurring the bandwidth cost of transferring data between render passes (which requires accessing main memory), as is common in hybrid rendering architectures for desktop applications. By reducing the number of render passes, this architecture is particularly suitable for tile-based memory architectures commonly used in mobile devices. Also, since all effects are rendered in the same shader, it is much easier to optimize the shader code by keeping shared data for only as long as needed to complete the computation of the effects using that data.
- while effect-per-pass rendering pipelines can only enable effects globally (or globally for ray tracing as described previously), embodiments of the present invention can effectively disable effects per-object and per-bounce based on previous roughness or other material property values.
- in shadow rendering cases, while the cost is still one ray cast per pixel, the shading of that ray can be completed more quickly. For example, at the second bounce of a smooth reflector or refractor, it may still be necessary to capture the material model of reflected or refracted objects, but the same level of quality as desired for shading a primary hit may not be required.
- the rendering process can revert to rasterized shadows from secondary ray onwards, even if ray-traced shadows are enabled. This reduces the cost of shadows from one ray cast per pixel to one texture sample per pixel.
- the noise-free rendering technique involves configuring the rendering pipeline to use deterministic ray casts.
- the noise-free rendering pipeline is configured to support only delta lights (a.k.a. punctual lights) that are sampled deterministically. Delta lights can include directional lights, point lights, spotlights, and the like.
- This noise-free pipeline also mitigates costs by reducing the quality of secondary rays.
- instead of recursive function calls for shadows (e.g., rayQuery), the shadowing can be calculated from rasterized shadows stored in shadow maps.
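- A sketch of that fallback (the sampler and matrix binding names are assumptions): one shadow-map sample replaces one shadow ray per pixel for secondary hits:

```glsl
// Illustrative rasterized-shadow fallback; binding names are assumptions.
layout(set = 0, binding = 1) uniform sampler2DShadow shadowMap;
layout(set = 0, binding = 2) uniform LightData { mat4 lightViewProj; };

float shadowFactor(vec3 worldPos) {
    vec4 lp  = lightViewProj * vec4(worldPos, 1.0);
    vec3 ndc = lp.xyz / lp.w;
    // Assumes a [0,1] depth range and standard UV remap of the light frustum.
    vec3 uvz = vec3(ndc.xy * 0.5 + 0.5, ndc.z);
    return texture(shadowMap, uvz);  // hardware depth-compare sample
}
```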
- direct illumination of objects over a range of roughness values with a roughness cut-off can be computed using a distribution function, such as a physically based microfacet bidirectional reflectance distribution function (BRDF) using a GGX distribution function, the Smith geometry shadowing term, and the Schlick approximation for the Fresnel term.
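- For reference, a compact GLSL evaluation of such a microfacet BRDF is sketched below; this is the standard GGX/Smith/Schlick formulation, not code reproduced from the patent:

```glsl
const float PI = 3.14159265359;

// GGX normal distribution; alpha = roughness^2, so a2 here is roughness^4.
float distributionGGX(float ndoth, float roughness) {
    float a2 = roughness * roughness; a2 *= a2;
    float d  = ndoth * ndoth * (a2 - 1.0) + 1.0;
    return a2 / (PI * d * d);
}

// Smith geometry shadowing with the common direct-lighting k remap.
float geometrySmith(float ndotv, float ndotl, float roughness) {
    float k = (roughness + 1.0); k = k * k / 8.0;
    float gv = ndotv / (ndotv * (1.0 - k) + k);
    float gl = ndotl / (ndotl * (1.0 - k) + k);
    return gv * gl;
}

// Schlick approximation for the Fresnel term.
vec3 fresnelSchlick(float vdoth, vec3 f0) {
    return f0 + (1.0 - f0) * pow(1.0 - vdoth, 5.0);
}

vec3 specularBRDF(vec3 n, vec3 v, vec3 l, vec3 f0, float roughness) {
    vec3 h = normalize(v + l);
    float ndotv = max(dot(n, v), 1e-4), ndotl = max(dot(n, l), 1e-4);
    float D = distributionGGX(max(dot(n, h), 0.0), roughness);
    float G = geometrySmith(ndotv, ndotl, roughness);
    vec3  F = fresnelSchlick(max(dot(v, h), 0.0), f0);
    return (D * G) * F / (4.0 * ndotv * ndotl);
}
```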
- computing indirect illumination in real time with such distribution functions is challenging because the direction of the indirect light ray casts changes with the position of the point being rendered.
- Indirect illumination of materials above a roughness threshold can be achieved by using image-based lighting, or environment maps, using a split-sum approximation.
- the split-sum approximation uses pre-filtered environment maps and a 2D BRDF look-up table (LUT) to compute plausible reflections of the environment map in objects of varying roughness values using only two texture lookups at runtime.
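- A runtime sketch of the split-sum lookup, assuming a prefiltered cube map whose mip levels encode increasing roughness and a 2D LUT indexed by (N·V, roughness); the binding names and mip count are assumptions:

```glsl
layout(set = 0, binding = 3) uniform samplerCube prefilteredEnv; // prefiltered per mip by roughness
layout(set = 0, binding = 4) uniform sampler2D brdfLUT;          // (NdotV, roughness) -> (scale, bias)

const float MAX_REFLECTION_LOD = 4.0; // assumed mip count of the prefiltered map

vec3 specularIBL(vec3 n, vec3 v, vec3 f0, float roughness) {
    vec3 r = reflect(-v, n);
    // Texture lookup 1: prefiltered radiance at a mip matching the roughness.
    vec3 prefiltered = textureLod(prefilteredEnv, r, roughness * MAX_REFLECTION_LOD).rgb;
    // Texture lookup 2: precomputed BRDF integration (scale/bias applied to F0).
    vec2 ab = texture(brdfLUT, vec2(max(dot(n, v), 0.0), roughness)).rg;
    return prefiltered * (f0 * ab.x + ab.y);
}
```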
- a separate reflection probe containing the scene geometry can also be re-rendered to address any geometry that might be occluding the environment map or any lights in the scene.
- the split-sum approximation can be applied to transparency rendering. After reflecting the incident direction across the view plane, that direction can be used to sample the environment map. The same LUT values can be used to calculate the reflection of the environment map on the back face. Further, the calculated reflection can be multiplied by albedo (i.e., characteristic color of an object) and blended with a diffuse term based on opacity to be used as a rough transmission contribution.
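- One possible GLSL reading of this transmission approximation, reusing the prefiltered map and LUT bindings from the previous sketch (the direction flip and all names are assumptions about the described approach):

```glsl
// Illustrative rough-transmission contribution; parameter names are assumptions.
vec3 roughTransmission(vec3 n, vec3 v, vec3 f0, float roughness,
                       vec3 albedo, float opacity, vec3 diffuseTerm) {
    // Reflect the incident direction across the view plane so the environment
    // map is sampled "behind" the object (assumed interpretation of the text).
    vec3 backDir = -reflect(-v, n);
    vec3 prefiltered = textureLod(prefilteredEnv, backDir,
                                  roughness * MAX_REFLECTION_LOD).rgb;
    vec2 ab = texture(brdfLUT, vec2(max(dot(n, v), 0.0), roughness)).rg;
    vec3 transmitted = prefiltered * (f0 * ab.x + ab.y);
    // Tint by albedo and blend with the diffuse term based on opacity.
    return mix(transmitted * albedo, diffuseTerm, opacity);
}
```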
- the noise-free design achieves suitable rendering results within a single frame.
- the rendering process can be configured for smooth reflection and transmission ray-traced effects.
- the rendering process can include rasterization to approximate low-frequency effects, such as rough reflection, rough transmission, soft shadows, and the like.
- because the noise-free design does not require denoising or accumulation processes, the rendering process reduces the need to read intermediate rendering results from memory, which lowers the memory bandwidth load and is particularly suitable for mobile devices.
- the present invention implements Blended Fresnel Splitting to handle joint rendering of reflection and refraction rather than managing separate passes for each of these effects.
- rays can be cast both in the reflected and refracted directions and processed recursively.
- this introduces an exponential per-pixel performance cost. Even limiting the maximum bounce count per pixel is not sufficient to manage the cost for mobile applications.
- the rendering process can randomly select between the reflected and refracted direction based on the Fresnel value at the pixel.
- this technique uses cutoff thresholds to determine whether to cast a ray or use a non-ray-tracing approximation of the ray cast result (e.g., color).
- sample data may be measured to generate an approximation of the desired effect.
- a reflection probe can be used to give a low-resolution cube map of an object’s immediate surroundings, which can be sampled to provide an approximation of either transmitted or reflected color.
- cutoffs for transmission and reflection can be controlled separately or both effects can share the same cutoff.
- the rendering method can define an additional cutoff value against which the Fresnel term at each pixel is compared to avoid casting two rays per pixel.
- the method can include casting a reflection ray and sampling a cube map in the refracted direction.
- the method can include casting a refraction ray and sampling the cube map (i.e., environment map) in the reflected direction.
- the method can attenuate both the reflected and refracted ray casts based on the absolute difference between the pixel’s Fresnel term and the Fresnel cutoff. This attenuation can smooth the transition at the cutoff point at the expense of making both reflections and refractions appear more faded in the final image.
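- The following sketch puts these pieces together; the cutoff value and the traceRay/environmentSample helpers are hypothetical stand-ins for the recursive in-line ray cast and the cube-map (skybox or reflection probe) lookup:

```glsl
const float FRESNEL_CUTOFF = 0.5;  // hypothetical application-tuned threshold

vec3 traceRay(vec3 origin, vec3 dir);  // assumed recursive in-line ray cast
vec3 environmentSample(vec3 dir);      // assumed skybox/reflection-probe lookup

// One traced ray plus one environment sample per pixel, instead of two rays.
vec3 blendedFresnelSplit(vec3 hitPos, vec3 reflDir, vec3 refrDir, float fresnel) {
    vec3 reflected, refracted;
    if (fresnel > FRESNEL_CUTOFF) {
        reflected = traceRay(hitPos, reflDir);   // reflection dominates: trace it
        refracted = environmentSample(refrDir);  // approximate the other branch
    } else {
        refracted = traceRay(hitPos, refrDir);   // refraction dominates: trace it
        reflected = environmentSample(reflDir);  // approximate the other branch
    }
    // The text attenuates both contributions based on |fresnel - cutoff| to
    // smooth the transition; the exact formula is not specified, so this
    // linear falloff is only one plausible choice.
    float atten = 1.0 - abs(fresnel - FRESNEL_CUTOFF);
    return atten * mix(refracted, reflected, fresnel);
}
```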
- a glass rendering recursive process can also include following the transmitted path except in the case of total internal reflection, which maintains the one-ray-per-hit cost of Blended Fresnel Splitting but without any skybox sampling or blending.
- Interactive-rate path tracers accumulate an image over multiple frames. As a consequence, they can randomize between reflection and refraction for every pixel in every frame and the results are smoothed by the accumulation step. However, for a real-time noise-free path tracer, both reflected and refracted images must be available from the first frame without any noise from randomization.
- the rendering process using Blended Fresnel Splitting can handle smooth transmissive objects using one ray cast and two environment map (e.g., skybox or reflection probe) samples.
- Figure 4 is a simplified flow diagram illustrating a noise-free unified path tracing rendering pipeline 400 according to embodiments of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
- path tracing pipeline 400 includes a pre-z (depth prepass) vertex shader 410 followed by a vertex shader 420 and fragment shader 430 corresponding to the holistic noise-free render pass.
- This rendering pipeline 400 is configured as a noise-free unified path tracer using a depth pre-pass, secondary ray optimization, and blended Fresnel splitting techniques.
- this pipeline leverages an important performance improvement of deferred pipelines: occluded fragments that fail depth testing can be skipped in the subsequent holistic lighting pass.
- this pipeline 400 is configured to render non-primary ray casts processes using a material property cut-off threshold (e.g., maximum roughness) to adaptively disable effects per-object and per-bounce of a ray cast to achieve a desired rendering quality. Further, pipeline 400 implements Blended Fresnel Splitting to approximate the appearance of ray splitting at transmissive surfaces without casting two rays per pixel.
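- As a small sketch of the depth pre-pass benefit (assuming Vulkan GLSL; the pipeline depth-state setup is omitted), the holistic lighting fragment shader can force early depth testing so occluded fragments never run:

```glsl
#version 460
// With depth already written by the pre-z pass, this qualifier guarantees the
// depth test runs before this shader, so occluded fragments are discarded
// before any expensive holistic lighting work.
layout(early_fragment_tests) in;

layout(location = 0) out vec4 outColor;

void main() {
    // ... noise-free unified path tracing for visible fragments only ...
    outColor = vec4(0.0);
}
```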
- the present invention provides a video graphics rendering system (such as system 100 of Figure 1) configured to implement the noise-free unified path tracer pipeline 400.
- This can be configured as a mobile device (e.g., smartphone, tablet, VR goggles, etc.) having at least a processor configured to perform the methods discussed previously using executable code in a memory storage with instructions to perform the actions of these methods.
- Example method flows of the operation of such rendering systems are discussed with respect to Figures 5 and 6.
- Figure 5 is a simplified flow diagram illustrating a method 500 for rendering refractive transparent objects in real-time video graphics using ray tracing according to embodiments of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims.
- One of ordinary skill in the art would recognize many variations, alternatives, and modifications. For example, one or more steps may be added, removed, repeated, replaced, modified, rearranged, and/or overlapped, and they should not limit the scope of the claims.
- method 500 of rendering objects can be performed by a rendering system, such as system 100 in Figure 1. More specifically, a processor of the system can be configured to perform the actions of method 500 by executable code stored in a memory storage (e.g., permanent storage) of the system.
- method 500 can include step 502 of receiving a plurality of graphics data associated with a 3D scene including at least a first object (or including all object instantiations in the scene). This can include all data necessary to determine the first object intersected by a ray cast through each pixel in the viewport, where the plurality of graphics data includes at least a plurality of vertex data associated with a plurality of vertices in the 3D scene.
- the method also includes generating a plurality of primitive data using at least the plurality of vertex data and a vertex shader.
- the plurality of primitive data can include at least a position data and a material data.
- the method includes determining a roughness value of at least the first object visible at each pixel using at least the plurality of vertex data and a rasterization of material properties and texture coordinates associated with at least the first object.
- method 500 further includes interpolating render data through the rasterization of objects visible to a camera, in place of casting primary rays. This includes using rasterization to calculate both the direction of an implicit primary ray through a pixel and a first intersection position of that ray with the first object.
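- A sketch of how the implicit primary ray falls out of rasterization (variable and binding names are assumptions): the interpolated world-space position of the fragment is itself the first intersection position, and the ray direction follows from the camera position:

```glsl
layout(location = 0) in vec3 inWorldPos;  // interpolated by the rasterizer
layout(set = 0, binding = 5) uniform Camera { vec3 cameraPos; };  // assumed binding

// No primary ray is actually traced: rasterization has already found the
// first visible surface for this pixel.
vec3 primaryHitPosition()  { return inWorldPos; }
vec3 primaryRayDirection() { return normalize(inWorldPos - cameraPos); }
```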
- the method further includes storing a maximum roughness value across a hits path associated with the implicit primary ray.
- the method includes casting at least a secondary ray from the first intersection position within a fragment shader.
- the method can further include determining whether or not to cast the secondary ray by comparing the roughness value against a predetermined threshold roughness value (i.e., secondary ray optimization).
- the secondary ray can be cast in either a reflected or a refracted direction from the first intersection position.
- the vertex shader and the fragment shader can be processed using a GPU, such as the GPU 132 in system 100 of Figure 1.
- the secondary ray is cast using hardware-accelerated in-line ray tracing.
- the method includes calculating a Fresnel value associated with the first intersection position and the implicit primary ray in the fragment shader.
- the method includes determining whether to skip, reflect, or refract the secondary ray from the first intersection position using the fragment shader based at least on the Fresnel value and the roughness value.
- the determination of whether to skip, reflect, or refract the secondary ray can include comparing the Fresnel value against a predetermined threshold Fresnel value (i.e., Blended Fresnel Splitting).
- the method includes providing a visual continuity across reflected and refracted images using an environment map associated with the 3D scene.
- the method includes sampling an environment map to provide a consistent intermediate image used to blend between reflected and refracted images.
- the method includes attenuating the reflected and refracted ray casts following Blended Fresnel Splitting for joint rendering of reflection and refraction for smooth transmissive objects.
- the method includes shading at least the first pixel covering the first intersection position based at least on the secondary ray.
- the method includes storing at least the first pixel in a frame buffer.
- the frame buffer is configured within a memory, such as the memory 140 in system 100 of Figure 1.
- method 500 includes transforming the 3D scene to a screen space, such as display 162 of system 100 of Figure 1.
- Figure 6 is a simplified flow diagram illustrating a method for generating a video with ray tracing according to embodiments of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims.
- One of ordinary skill in the art would recognize many variations, alternatives, and modifications. For example, one or more steps may be added, removed, repeated, replaced, modified, rearranged, and/or overlapped, and they should not limit the scope of the claims.
- the method of generating a video can be performed by a rendering system, such as system 100 in Figure 1. More specifically, a processor of the system can be configured to perform the actions of method 600 by executable code stored in a memory storage (e.g., permanent storage) of the system. As shown, method 600 can include step 602 of generating a 3D scene including at least a first object (or including all object instantiations in the scene). In an example, the 3D scene is generated by a CPU, such as the CPU 130 of system 100 in Figure 1. In step 604, the method includes receiving a plurality of graphics data associated with the 3D scene, the plurality of graphics data including a plurality of vertex data associated with a plurality of vertices in the 3D scene.
- the method includes determining a roughness value of at least the first object using at least the plurality of vertex data and a rasterization of material properties and texture coordinates associated with at least the first object. This data is used to compute a primary ray direction and hit position without casting a primary ray.
- the method includes casting a secondary ray within a fragment shader from the first intersection position between the first object and the implicit primary ray determined from the rasterization.
- the method includes calculating a Fresnel value associated with the first intersection position and the implicit primary ray in the fragment shader.
- the method includes determining whether to skip, reflect, or refract the secondary ray from the first intersection position using the fragment shader based at least on the Fresnel value and the roughness value.
- the method includes shading at least the first pixel covering the first intersection position based at least on the secondary ray.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Generation (AREA)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202280101089.3A CN120153399A (en) | 2022-11-07 | 2022-12-23 | Method and system for rendering video graphics |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202263423171P | 2022-11-07 | 2022-11-07 | |
| US63/423,171 | 2022-11-07 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024102165A1 true WO2024102165A1 (en) | 2024-05-16 |
Family
ID=91033156
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2022/082372 Ceased WO2024102165A1 (en) | 2022-11-07 | 2022-12-23 | Methods and systems for rendering video graphics |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN120153399A (en) |
| WO (1) | WO2024102165A1 (en) |
- 2022-12-23 WO PCT/US2022/082372 patent/WO2024102165A1/en not_active Ceased
- 2022-12-23 CN CN202280101089.3A patent/CN120153399A/en active Pending
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20070206008A1 (en) * | 2000-02-25 | 2007-09-06 | The Research Foundation Of The State University Of New York | Apparatus and Method for Real-Time Volume Processing and Universal Three-Dimensional Rendering |
| US20180005432A1 (en) * | 2016-06-29 | 2018-01-04 | AR You Ready LTD. | Shading Using Multiple Texture Maps |
| US20210287419A1 (en) * | 2018-12-28 | 2021-09-16 | Intel Corporation | Speculative execution of hit and intersection shaders on programmable ray tracing architectures |
Also Published As
| Publication number | Publication date |
|---|---|
| CN120153399A (en) | 2025-06-13 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22965368; Country of ref document: EP; Kind code of ref document: A1 |
| | WWE | Wipo information: entry into national phase | Ref document number: 202280101089.3; Country of ref document: CN |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | WWP | Wipo information: published in national office | Ref document number: 202280101089.3; Country of ref document: CN |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 22965368; Country of ref document: EP; Kind code of ref document: A1 |