
WO2024102165A1 - Methods and systems for rendering video graphics - Google Patents


Info

Publication number
WO2024102165A1
Authority
WO
WIPO (PCT)
Prior art keywords
ray
data
scene
vertex
intersection position
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2022/082372
Other languages
French (fr)
Inventor
Chen Li
Paula NING
Javier SANDOVAL
Hongyu Sun
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Innopeak Technology Inc
Original Assignee
Innopeak Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Innopeak Technology Inc filed Critical Innopeak Technology Inc
Priority to CN202280101089.3A priority Critical patent/CN120153399A/en
Publication of WO2024102165A1 publication Critical patent/WO2024102165A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/06Ray-tracing
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures

Definitions

  • the present invention is directed to graphics rendering systems and methods. According to a specific embodiment, the present invention provides a method that utilizes noise-free rendering that optimizes secondary ray tracing. There are other embodiments as well.
  • Embodiments of the present invention can be implemented in conjunction with existing systems and processes.
  • a rendering system configuration and its related methods according to the present invention can be used in a wide variety of systems, including virtual reality (VR) systems, mobile devices, and the like.
  • various techniques according to the present invention can be adopted into existing systems via integrated circuit fabrication, operating software, and application programming interfaces (APIs). There are other benefits as well.
  • a system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions.
  • One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions.
  • One general aspect includes a method for rendering refractive transparent objects in real-time video graphics using ray tracing.
  • the method also includes receiving a plurality of graphics data associated with a three-dimensional (3D) scene, including all data necessary to determine the first object intersected by a ray cast through each pixel in the viewport, the plurality of graphics data including a plurality of vertex data associated with a plurality of vertices in the 3D scene.
  • the method also includes generating a plurality of primitive data using at least the plurality of vertex data and a vertex shader, the plurality of primitive data may include a position data and a material data.
  • the method also includes determining a roughness value of the first object using at least the plurality of vertex data and a rasterization of material properties and texture coordinates associated with at least the first object.
  • the method also includes using rasterization to calculate both the direction of an implicit primary ray through a pixel and a first intersection position of that ray with the first object.
  • the method also includes casting a secondary ray from the first intersection position within a fragment shader.
  • the method also includes providing a visual continuity across reflected and refracted images using an environment map associated with the 3D scene.
  • the method also includes accessing an environment map associated with the 3D scene.
  • the method also includes calculating a Fresnel value associated with the first intersection position and the implicit primary ray in the fragment shader.
  • the method also includes determining whether to skip, reflect, or refract the secondary ray from the first intersection position using the fragment shader based at least on the Fresnel value and the roughness value.
  • the method also includes shading the first pixel covering the first intersection position based at least on the secondary ray.
  • the method also includes storing the first pixel in a frame buffer.
  • Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
  • Implementations may include one or more of the following features.
  • the secondary ray may be cast using hardware-accelerated in-line ray tracing.
  • the method may include interpolating render data through the rasterization of objects visible to a camera, the objects including the first object.
  • the method may include determining whether or not to cast the secondary ray by comparing the roughness value against a predetermined threshold roughness value.
  • the method may include storing a maximum roughness value across a hits path associated with the implicit primary ray.
  • the method may include determining whether to skip, reflect, or refract the secondary ray by comparing the Fresnel value against a predetermined threshold Fresnel value.
  • the secondary ray may be cast in a refracted or reflected direction from the first intersection position.
  • the system may include permanent storage, which holds an application that includes executable instructions.
  • the system also includes volatile memory, which holds data used by the application when it is run.
  • the system also includes a processor coupled to the storage and the memory, the processor being configured to: execute application instructions, generate a plurality of graphics data associated with a three-dimensional (3D) scene including at least a first object, the plurality of graphics data including a plurality of vertex data associated with a plurality of vertices in the 3D scene; generate a plurality of primitive data using at least the plurality of vertex data and a vertex shader, the plurality of primitive data may include a position data and a material data; determine a roughness value of the first object visible at each pixel using at least the plurality of vertex data and a rasterization of material properties and texture coordinates associated with at least the first object; cast a secondary ray within a fragment shader from a first intersection position of the first object and an implicit primary ray determined from the rasterization; calculate a Fresnel value associated with the first intersection position and the implicit primary ray in the fragment shader; determine whether to skip, reflect, or refract a single secondary ray from the first intersection position using the fragment shader based at least on the Fresnel value and the roughness value; shade the first pixel covering the first intersection position based at least on the secondary ray if one is cast; and store the first pixel in the memory.
  • Implementations may include one or more of the following features.
  • the system where the processor may include a central processing unit (CPU) and a graphics processing unit (GPU).
  • the memory is shared by the CPU and the GPU.
  • the memory may include a frame buffer for storing the first pixel.
  • One general aspect includes a method for generating a video with ray tracing.
  • the method includes generating a three-dimensional (3D) scene made up of objects.
  • the method also includes receiving a plurality of graphics data associated with each object in the 3D scene, the plurality of graphics data including a plurality of vertex data associated with a plurality of vertices in each object.
  • the method also includes generating a plurality of primitive data using at least the plurality of vertex data and a vertex shader, the plurality of primitive data may include a position data and a material data.
  • the method also includes determining a roughness value of the first object using at least the plurality of vertex data and a rasterization of material properties and texture coordinates associated with at least the first object in order to approximate a primary ray cast.
  • the method also includes casting a secondary ray from the first point within a fragment shader.
  • the method also includes calculating a Fresnel value associated with the first point and the primary ray in the fragment shader.
  • the method also includes determining whether to reflect or refract the secondary ray from the first point using the fragment shader based at least on the Fresnel value and the roughness value.
  • the method also includes shading the first pixel covering the first point based at least on the secondary ray.
  • Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
  • Implementations may include one or more of the following features.
  • the method may include accessing an environment map associated with the 3D scene.
  • the method may include providing a visual continuity across reflected and refracted images using an environment map associated with the 3D scene.
  • the 3D scene is generated using a central processing unit.
  • the vertex shader and the fragment shader are processed using a graphics processing unit.
  • the central processing unit and the graphics processing unit share a memory.
  • the present invention provides configurations and methods for graphics rendering systems that minimize memory bandwidth load using a noise-free unified path tracer. Additionally, the present invention implements secondary ray optimizations and Blended Fresnel Splitting to further reduce memory costs while achieving real-time rendering for mobile applications.
  • Figure 1 is a simplified diagram illustrating a system configured for rendering video graphics according to embodiments of the present invention.
  • Figure 2 is a simplified flow diagram illustrating a conventional forward pipeline for rendering video graphics.
  • Figure 3 is a simplified flow diagram illustrating a conventional hybrid pipeline for rendering video graphics.
  • Figure 4 is a simplified flow diagram illustrating a rendering pipeline in a rendering system according to embodiments of the present invention.
  • Figure 5 is a simplified flow diagram illustrating a method for rendering video graphics according to embodiments of the present invention.
  • Figure 6 is a simplified flow diagram illustrating a method for generating a video according to embodiments of the present invention.
  • the present invention is directed to graphics rendering systems and methods. According to a specific embodiment, the present invention provides a method and system that utilizes noise-free rendering that optimizes secondary ray tracing.
  • the present invention can be configured for real-time applications (RTAs), such as video conferencing, video gaming, and virtual reality (VR). There are other embodiments as well.
  • the present invention provides configurations and methods for graphics rendering systems that minimize memory bandwidth load using a noise-free unified path tracer. Additionally, the present invention implements secondary ray optimizations and Blended Fresnel Splitting to further reduce memory costs while achieving real-time rendering for mobile applications.
  • any element in a claim that does not explicitly state “means for” performing a specified function, or “step for” performing a specific function, is not to be interpreted as a “means” or “step” clause as specified in 35 U.S.C. Section 112, Paragraph 6.
  • the use of “step of” or “act of” in the Claims herein is not intended to invoke the provisions of 35 U.S.C. 112, Paragraph 6.
  • FIG. 1 is a simplified block diagram illustrating a mobile system 100 for rendering video graphics according to embodiments of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
  • the mobile system 100 can be configured within housing 110 and can include camera device 120 (or other image or video capturing device), processor device 130, memory device 140 (e.g., volatile memory storage), and storage device 150 (e.g., permanent memory storage).
  • Camera 120 can be mounted on housing 110 and be configured to capture an input image.
  • the input image can be stored in memory 140, which may include a random-access memory (RAM) device, an image/video buffer device, a frame buffer, or the like.
  • Various software, executable instructions, and files can be stored in storage device 150, which may include read-only memory (ROM), a hard drive, or the like.
  • Processor 130 may be coupled to each of the previously mentioned components and be configured to communicate between these components.
  • processor 130 includes a central processing unit (CPU), a network processing unit (NPU), or the like.
  • System 100 may also include graphics processing unit (GPU) 132 coupled to at least processor 130 and memory 140.
  • memory 140 is configured to be shared between processor 130 (e.g., CPU) and GPU 132, and is configured to hold data used by an application when it is run. As memory 140 is shared, it is important to use memory 140 efficiently. For example, a high memory usage by GPU 132 may negatively impact system performance.
  • shared memory or RAM can be used in a variety of ways, depending on the specific needs of the CPU and GPU. The amount of shared RAM that is available in a device can have a significant impact on its performance, as it determines how much data can be stored and accessed quickly by the CPU and GPU.
  • memory 140 is configured as a tile-based memory.
  • System 100 may also include user interface 160 and network interface 170.
  • User interface 160 may include display region 162 that is configured to display text, images, videos, rendered graphics, interactive elements, etc.
  • Display 162 may be coupled to the GPU 132 and may also be configured to display at a refresh rate of at least 24 frames per second.
  • Display region 162 may comprise a touchscreen display (e.g., in a mobile device, tablet, etc.)
  • user interface 160 may also include touch interface 164 for receiving user input (e.g., keyboard or keypad in a mobile device, laptop, or other computing devices).
  • User interface 160 may be used in real-time applications (RTAs), such as multimedia streaming, video conferencing, navigation, video games, and the like.
  • Network interface 170 may be configured to transmit and receive instructions and files (e.g., using Wi-Fi, Bluetooth, Ethernet, etc.) for graphic rendering.
  • network interface 170 may be configured to compress or down-sample images for transmission or further processing.
  • Network interface 170 may be configured to send one or more images to a server for OCR.
  • Processor 130 may be coupled to and configured to communicate between user interface 160, network interface 170, and/or other interfaces.
  • processor 130 and GPU 132 may be configured to perform steps for rendering video graphics, which can include those related to the executable instructions stored in storage 150.
  • Processor 130 may be configured to execute application instructions and generate a plurality of graphics data associated with a 3D scene including at least a first object.
  • the plurality of graphics data can include a plurality of vertex data associated with a plurality of vertices in the 3D scene (e.g., for each object).
  • GPU 132 may be configured to generate a plurality of primitive data using at least the plurality of vertex data and a vertex shader.
  • the plurality of primitive data may include a position data and a material data.
  • GPU 132 may be configured to determine a roughness value of the first object using at least the plurality of vertex data and a rasterization of material properties and texture coordinates associated with at least the first object.
  • GPU 132 may be configured to cast a primary ray in a first direction through the first pixel onto a first point of the first object. Alternatively, GPU 132 may be configured to skip the primary ray cast by computing the primary hit using data interpolated by rasterization.
  • the GPU 132 can also be configured to cast a secondary ray from the first point or the primary hit point within a fragment shader. Then, GPU 132 can be configured to calculate a Fresnel value associated with the first point and the primary ray in the fragment shader. Using at least the Fresnel value and the roughness value, the GPU 132 may be configured to determine whether to reflect or refract the secondary ray from the first point or the primary hit point using the fragment shader.
  • the GPU 132 may be configured to shade the first pixel covering the first point or the primary hit point based at least on the secondary ray, and the first pixel can then be stored in memory 140.
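  • As a simplified illustration of this per-pixel flow, the C++ sketch below uses the rasterized hit as the implicit primary hit and then decides whether to skip, reflect, or refract a single secondary ray; the helper functions (e.g., rasterizedWorldPosition, castSecondaryRay, environmentApproximation) are hypothetical stand-ins for renderer-provided operations rather than APIs defined by this disclosure.

```cpp
// High-level sketch (hypothetical names) of the per-pixel flow: use the
// rasterized hit as the implicit primary hit, then decide whether to skip,
// reflect, or refract a single secondary ray.
struct Vec3 { float x, y, z; };
struct Color3 { float r, g, b; };

// Stand-ins for renderer-provided operations.
Vec3   rasterizedWorldPosition();   // first intersection position, interpolated by the rasterizer
float  rasterizedRoughness();       // roughness from rasterized material properties and textures
Color3 castSecondaryRay(const Vec3& origin, bool refract);
Color3 environmentApproximation(float roughness);

enum class SecondaryAction { Skip, Reflect, Refract };

SecondaryAction chooseSecondaryAction(float fresnel, float roughness,
                                      float fresnelCutoff, float roughnessCutoff) {
    if (roughness > roughnessCutoff) {
        return SecondaryAction::Skip;  // too rough: approximate instead of casting a ray
    }
    return (fresnel > fresnelCutoff) ? SecondaryAction::Reflect : SecondaryAction::Refract;
}

Color3 shadePixel(float fresnel, float fresnelCutoff, float roughnessCutoff) {
    Vec3 hit = rasterizedWorldPosition();   // implicit primary hit: no primary ray is cast
    float roughness = rasterizedRoughness();
    switch (chooseSecondaryAction(fresnel, roughness, fresnelCutoff, roughnessCutoff)) {
        case SecondaryAction::Skip:    return environmentApproximation(roughness);
        case SecondaryAction::Reflect: return castSecondaryRay(hit, /*refract=*/false);
        case SecondaryAction::Refract: return castSecondaryRay(hit, /*refract=*/true);
    }
    return {0.0f, 0.0f, 0.0f};
}
```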
  • Other embodiments of this system include corresponding computer systems, apparatuses, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods. Further details of methods are discussed with reference to the following figures.
  • FIG. 2 is a simplified flow diagram illustrating conventional forward pipeline 200 for rendering video graphics.
  • forward pipeline 200 includes vertex shader 210 followed by fragment shader 220.
  • a CPU provides graphics data of a 3D scene (e.g., from memory, storage, over network, etc.) to a graphic card or a GPU.
  • vertex shader 210 transforms objects in the 3D scene from object space to screen space. This process includes projecting the geometries of the objects and breaking them down into vertices, which are then transformed and split into fragments, or pixels.
  • fragment shader 220 these pixels are shaded (e.g., colors, lighting, textures, etc.) before they are passed onto a display (e.g., screen of a smartphone, tablet, VR goggles, etc.).
  • rendering effects are processed for every vertex and on every fragment in the visible scene for every light source.
  • FIG. 3 is a simplified flow diagram illustrating a conventional deferred pipeline 300 for rendering video graphics.
  • the prepass process 310 involves receiving the graphics data of the 3D scene from the CPU and generating a geometry buffer (G-buffer) with data needed for subsequent rendering passes, such as color, depth, normal, etc.
  • a ray traced reflections pass 320 involves processing the G-buffer data to determine the reflections of the scene.
  • the ray traced shadows pass 330 involves processing the G-buffer data to determine the shadows of the scene.
  • a denoising pass 340 removes the noise for pixels that were ray-traced and blends across pixels that were not ray-traced.
  • in the main shading pass 350, the reflections, shadows, and material evaluations are combined to produce the shaded output with each pixel color.
  • in the post pass 360, the shaded output is subject to additional rendering processes such as color grading, depth of field, etc.
  • the deferred pipeline 300 reduces the total fragment count by only processing rendering effects based on unoccluded pixels. This is accomplished by breaking up the rendering process into multiple stages (i.e., passes) in which the color, depth, and normal of the objects in the 3D scene are written to separate buffers that are subsequently rendered together to produce the final rendered frame. Subsequent passes use depth values to skip rendering of occluded pixels when executing more complex lighting shaders.
  • the deferred render pipeline approach reduces the complexity of any single shader compared to the forward render pipeline approach, but having multiple rendering passes requires greater memory bandwidth, which is especially problematic for many architectures today with limited and shared memory.
  • Embodiments of the present invention provide methods and systems for graphics rendering implementing one or more optimization techniques to achieve real-time performance while minimizing memory bandwidth load.
  • the present invention provides for a method and system using noise-free unified path tracing that combines ray-traced effects in-line with real-time approximations of physically-based rendering.
  • These techniques can be performed by a GPU (such as GPU 132 of system 100 in Figure 1) configured within a graphic rendering system.
  • the rendering system can be configured as a mobile device, such as a smartphone, a tablet, VR goggles, or the like.
  • the techniques described herein may be implemented separately or in combination with each other and/or other conventional techniques depending upon the application.
  • the present invention implements secondary ray optimization techniques. While primary rays render a target scene as directly perceived by the camera, secondary rays can be configured to render indirect light effects (e.g., reflections, refractions, shadows, etc.). Depending on the application, such indirect light effects may be rendered in a manner that mitigates the costs of ray casts, such as secondary ray casts, tertiary ray casts, and beyond.
  • ray cost can be mitigated by using a designated cutoff threshold that restricts the number of ray casts based on a predetermined criteria for material properties, such as a roughness cutoff based on a roughness material property.
  • This technique can be implemented to skip ray casts, but does not support configuring a ray cast to render more cheaply because it is limited to the given material value.
  • the present invention can include tracking a minimum and/or maximum material property value along an entire path of a ray cast and using such values to determine a desired rendering quality. For example, a maximum roughness along the entire path may be used to determine a minimum necessary rendering quality. Using such path cutoff thresholds can enable the reduction of quality to the minimum necessary for the entire path.
  • cutoff thresholds can be used for different material properties or certain properties may share the same cutoff threshold depending on the application.
  • Other material properties can include metalness, opacity, gloss, index of refraction (IOR), and the like.
  • the stored material property value can be replaced with a designated cutoff threshold for a material property value (e.g., maximum roughness along a path) to skip certain rendering features, as discussed previously. In such cases, the stored maximum or minimum material property values can still be used as needed for environment map mip sampling.
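  • As a concrete sketch of this bookkeeping (with a hypothetical payload layout, not the disclosure's actual data format), the same stored roughness value can drive both the cutoff test and the environment map mip selection:

```cpp
#include <algorithm>

// Hypothetical per-ray payload: one roughness field serves both the cutoff
// test and environment-map mip selection, so no extra payload space is needed.
struct RayPayload {
    float maxRoughnessAlongPath = 0.0f;
};

// Update the payload each time a new surface is hit along the path.
void onHit(RayPayload& payload, float surfaceRoughness) {
    payload.maxRoughnessAlongPath =
        std::max(payload.maxRoughnessAlongPath, surfaceRoughness);
}

// Decide whether an expensive effect (e.g., a further reflection or
// refraction ray) is still worth processing for this path.
bool shouldCastSecondaryEffect(const RayPayload& payload, float roughnessCutoff) {
    return payload.maxRoughnessAlongPath <= roughnessCutoff;
}

// On a ray miss, reuse the stored roughness to choose which mip of a
// pre-filtered environment map to sample (0 = sharpest).
float environmentMipLevel(const RayPayload& payload, int mipCount) {
    return payload.maxRoughnessAlongPath * static_cast<float>(mipCount - 1);
}
```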
  • the present invention can also use a stochastic path tracing pipeline which implements subsurface scattering in opaque objects and volumetric backscattering in transmissive objects, which can also be disabled based on designated cutoff thresholds (e.g., maximum roughness along the path indicates that processing secondary ray casts from a rough object to a smooth object is not needed for the desired quality of the final image).
  • the color computation of the secondary ray cast can be simplified in the case of global illumination of a rough surface by using a simple opacity attenuated Lambertian diffuse calculation, or the like.
  • the new ray process can make context-aware decisions to skip advanced effects, such as subsurface scattering, reflections, refractive transmission, etc. Since the previous roughness value is already obtained for sampling the correct level of detail of a reflection map in the case of a ray miss, there is no additional cost in ray payload footprint to overload that roughness information to adaptively skip certain rendering effects.
  • Additional ray optimizations to reduce bandwidth costs can be achieved by the use of a unified rendering architecture (e.g., using a monolithic shader).
  • the monolithic shader code implements recursive rendering functions in an in-line configuration rather than in discrete ray-tracing shaders.
  • the rendering process for certain effects can be handled by different rendering processes (e.g., rasterization, ray tracing, path tracing, etc.). Further details are discussed below.
  • the present invention implements noise-free rendering and unified path tracing in a single shader. These features can enable more flexible and context-aware tuning of individual ray casts.
  • the unified path tracing technique involves computing all rendering effects (e.g., lighting effects) in the same holistic shader. For a given 3D scene, a path tracing process can be performed in a single recursive function call that is repeated per-pixel to produce a rendered frame. Also, in-line ray tracing can be configured with such single recursive function calls to cast rays from inside a vertex, fragment, or compute shader.
  • a basic rasterization pipeline can be used to compute primary rendering data for each pixel of a rendered frame (e.g., primary hit data per-pixel).
  • the vertex shader can extract rendering properties from the 3D scene for rasterization, such as world-space position, world-space normal, world-space tangent, UV mapping, material ID, etc.
  • the fragment shader can compute direct illumination of the primary hits using the material ID and dispatching a process to compute shadowing (e.g., rayQuery call).
  • the fragment shader can also dispatch recursive compute processes (e.g., rayQuery calls) for indirect illumination of predetermined materials (e.g., glass, mirror, diffuse materials, etc.).
  • the single-shader architecture provides opportunities for data reuse across different lighting effects without incurring the bandwidth cost of transferring data between render passes (which requires accessing main memory), as is common in hybrid rendering architectures for desktop applications. By reducing the number of render passes, this architecture is particularly suitable for tile-based memory architectures commonly used in mobile devices. Also, since all effects are rendered in the same shader, it is much easier to optimize the shader code by keeping shared data for only as long as needed to complete the computation of the effects using that data.
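  • The C++ sketch below illustrates the idea of such a unified path tracer: one recursive function computes the lighting for a whole path and casts further rays in-line; inlineRayQuery and environmentMiss are hypothetical placeholders for the shader's ray query and environment lookup, and this structure is only one plausible way to organize such a shader.

```cpp
#include <algorithm>

struct Color3 { float r, g, b; };
struct Ray { float ox, oy, oz, dx, dy, dz; };
struct Hit { bool valid; float roughness; Ray nextRay; Color3 direct; };

// Hypothetical stand-ins for the in-line ray query and environment lookup.
Hit    inlineRayQuery(const Ray& ray);
Color3 environmentMiss(const Ray& ray, float maxRoughnessAlongPath);

static Color3 add(const Color3& a, const Color3& b) {
    return {a.r + b.r, a.g + b.g, a.b + b.b};
}

// One recursive function computes the lighting for an entire path. It is
// invoked once per pixel and casts further rays in-line, rather than spawning
// separate ray-tracing shader stages or render passes.
Color3 tracePath(const Ray& ray, int depth, int maxDepth,
                 float maxRoughnessAlongPath, float roughnessCutoff) {
    Hit hit = inlineRayQuery(ray);
    if (!hit.valid) {
        // Ray miss: sample the environment map, using the accumulated
        // roughness to pick an appropriately blurred mip level.
        return environmentMiss(ray, maxRoughnessAlongPath);
    }
    Color3 result = hit.direct;  // direct illumination, computed in the same shader
    float pathRoughness = std::max(maxRoughnessAlongPath, hit.roughness);
    // Per-object, per-bounce decision: skip further indirect effects once the
    // path is too rough (or too deep) to need them at the target quality.
    if (depth + 1 < maxDepth && pathRoughness <= roughnessCutoff) {
        result = add(result, tracePath(hit.nextRay, depth + 1, maxDepth,
                                       pathRoughness, roughnessCutoff));
    }
    return result;
}
```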
  • While effect-per-pass rendering pipelines can only enable effects globally (or globally for ray tracing as described previously), the embodiments of the present invention can effectively disable effects per-object and per-bounce based on previous roughness or other material property values.
  • In shadow rendering cases, while the cost is still one ray cast per pixel, the shading of that ray can be completed more quickly. For example, at the second bounce of a smooth reflector or refractor, it may still be necessary to capture the material model of reflected or refracted objects, but the same level of quality as desired for shading a primary hit may not be required.
  • the rendering process can revert to rasterized shadows from secondary ray onwards, even if ray-traced shadows are enabled. This reduces the cost of shadows from one ray cast per pixel to one texture sample per pixel.
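  • A minimal sketch of this per-bounce shadow fallback, with stubs standing in for the real shadow ray query and shadow map lookup, is:

```cpp
// Stubs standing in for the renderer's shadow machinery.
float traceShadowRay()  { return 1.0f; }  // one ray cast per pixel
float sampleShadowMap() { return 1.0f; }  // one texture sample per pixel

// Use ray-traced shadows for the primary hit only, and fall back to
// rasterized shadow maps from the secondary ray onwards.
float shadowFactor(int bounceIndex, bool rayTracedShadowsEnabled) {
    if (bounceIndex == 0 && rayTracedShadowsEnabled) {
        return traceShadowRay();
    }
    return sampleShadowMap();
}
```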
  • the noise-free rendering technique involves configuring the rendering pipeline to use deterministic ray casts.
  • the noise-free rendering pipeline is configured to support only delta lights (a.k.a. punctual lights) that are sampled deterministically. Delta lights can include directional lights, point lights, spotlights, and the like.
  • This noise-free pipeline also mitigates costs by reducing the quality of secondary rays.
  • Instead of recursive function calls for shadows (e.g., rayQuery), the shadowing can be calculated from rasterized shadows stored in shadow maps.
  • direct illumination of objects over a range of roughness values with a roughness cut-off can be computed using a distribution function, such as a physically based microfacet bidirectional reflectance distribution function (BRDF) using a GGX distribution function, the Smith geometry shadowing term, and the Schlick approximation for the Fresnel term.
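  • One common formulation of these terms (GGX normal distribution, Smith geometry with the Schlick-GGX approximation, and the Schlick Fresnel term) is sketched below; the exact constants used by a given pipeline may differ.

```cpp
#include <cmath>

// Reference sketch of a microfacet specular BRDF assembled from the GGX
// distribution, Smith geometry, and Schlick Fresnel terms.
constexpr float kPi = 3.14159265358979f;

// GGX / Trowbridge-Reitz normal distribution function.
float distributionGGX(float NdotH, float roughness) {
    float a  = roughness * roughness;
    float a2 = a * a;
    float d  = NdotH * NdotH * (a2 - 1.0f) + 1.0f;
    return a2 / (kPi * d * d);
}

// Smith geometry term with the Schlick-GGX approximation (direct lighting).
float geometrySchlickGGX(float NdotX, float roughness) {
    float r = roughness + 1.0f;
    float k = (r * r) / 8.0f;
    return NdotX / (NdotX * (1.0f - k) + k);
}

float geometrySmith(float NdotV, float NdotL, float roughness) {
    return geometrySchlickGGX(NdotV, roughness) * geometrySchlickGGX(NdotL, roughness);
}

// Schlick approximation of the Fresnel term.
float fresnelSchlick(float cosTheta, float f0) {
    return f0 + (1.0f - f0) * std::pow(1.0f - cosTheta, 5.0f);
}

// Cook-Torrance specular lobe assembled from the three terms.
float specularBRDF(float NdotL, float NdotV, float NdotH, float VdotH,
                   float roughness, float f0) {
    float D = distributionGGX(NdotH, roughness);
    float G = geometrySmith(NdotV, NdotL, roughness);
    float F = fresnelSchlick(VdotH, f0);
    return (D * G * F) / (4.0f * NdotV * NdotL + 1e-4f);
}
```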
  • computing indirect illumination in real-time with such distribution functions faces challenges because the direction of the indirect light ray casts changes with the position of the point being rendered.
  • Indirect illumination of materials above a roughness threshold can be achieved by using image-based lighting, or environment maps, using a split-sum approximation.
  • the split-sum approximation uses pre-filtered environment maps and a 2D BRDF look-up table (LUT) to compute plausible reflections of the environment map in objects of varying roughness values using only two texture lookups at runtime.
  • a separate reflection probe with the scene geometry can also be rerendered to address any geometry that might be occluding the environment map or any lights in the scene.
  • the split-sum approximation can be applied to transparency rendering. After reflecting the incident direction across the view plane, that direction can be used to sample the environment map. The same LUT values can be used to calculate the reflection of the environment map on the back face. Further, the calculated reflection can be multiplied by albedo (i.e., characteristic color of an object) and blended with a diffuse term based on opacity to be used as a rough transmission contribution.
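  • The sketch below illustrates the two runtime texture lookups of the split-sum approximation together with one plausible reading of the rough transmission blend described above; samplePrefilteredEnv and sampleBRDFLUT are hypothetical stand-ins for the pre-filtered environment map and BRDF look-up table samplers.

```cpp
struct Color3 { float r, g, b; };

static Color3 scale(const Color3& c, float s) { return {c.r * s, c.g * s, c.b * s}; }
static Color3 add(const Color3& a, const Color3& b) { return {a.r + b.r, a.g + b.g, a.b + b.b}; }
static Color3 mul(const Color3& a, const Color3& b) { return {a.r * b.r, a.g * b.g, a.b * b.b}; }

// Hypothetical texture lookups: a pre-filtered environment map indexed by
// direction and mip, and a 2D BRDF look-up table indexed by (NdotV, roughness)
// returning a scale and bias.
Color3 samplePrefilteredEnv(float dirX, float dirY, float dirZ, float mip);
void sampleBRDFLUT(float NdotV, float roughness, float* scaleOut, float* biasOut);

// Split-sum specular image-based lighting: two texture lookups at runtime.
Color3 splitSumSpecular(float rx, float ry, float rz,   // reflected direction
                        float NdotV, float roughness, float mipCount,
                        const Color3& f0) {
    Color3 prefiltered = samplePrefilteredEnv(rx, ry, rz, roughness * (mipCount - 1.0f));
    float scaleTerm = 0.0f, biasTerm = 0.0f;
    sampleBRDFLUT(NdotV, roughness, &scaleTerm, &biasTerm);
    // specular = prefiltered * (F0 * scale + bias)
    Color3 brdf = {f0.r * scaleTerm + biasTerm,
                   f0.g * scaleTerm + biasTerm,
                   f0.b * scaleTerm + biasTerm};
    return mul(prefiltered, brdf);
}

// Rough transmission contribution: the back-face reflection of the environment
// map is tinted by albedo and blended with a diffuse term based on opacity
// (one plausible reading of the blend described above).
Color3 roughTransmission(const Color3& backFaceReflection, const Color3& albedo,
                         const Color3& diffuseTerm, float opacity) {
    Color3 transmitted = mul(backFaceReflection, albedo);
    return add(scale(diffuseTerm, opacity), scale(transmitted, 1.0f - opacity));
}
```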
  • the noise-free design achieves suitable rendering results within a single frame.
  • the rendering process can be configured for smooth reflection and transmission ray-traced effects.
  • the rendering process can include rasterization to approximate low-frequency effects, such as rough reflection, rough transmission, soft shadows, and the like.
  • Because the noise-free design does not require denoising or accumulation processes, the rendering process reduces the need to read intermediate rendering results from memory, which reduces the memory bandwidth load and is particularly suitable for mobile devices.
  • the present invention implements Blended Fresnel Splitting to handle joint rendering of reflection and refraction rather than managing separate passes for each of these effects.
  • rays can be cast both in the reflected and refracted directions and processed recursively.
  • this introduces an exponential per-pixel performance cost. Even limiting the maximum bounce count per pixel is not sufficient to manage the cost for mobile applications.
  • the rendering process can randomly select between the reflected and refracted direction based on the Fresnel value at the pixel.
  • this technique uses cutoff thresholds to determine whether to cast a ray or use a non-ray-tracing approximation of the ray cast result (e.g., color).
  • sample data may be measured to generate an approximation of the desired effect.
  • a reflection probe can be used to give a low-resolution cube map of an object’s immediate surroundings, which can be sampled to provide an approximation of either transmitted or reflected color.
  • cutoffs for transmission and reflection can be controlled separately or both effects can share the same cutoff.
  • the rendering method can define an additional cutoff value against which the Fresnel term at each pixel is compared to avoid casting two rays per pixel.
  • the method can include casting a reflection ray and sampling a cube map in the refracted direction.
  • the method can include casting a refraction ray and sampling the cube map (i.e., environment map) in the reflected direction.
  • the method can attenuate both the reflected and refracted ray casts based on the absolute difference between the pixel’s Fresnel term and the Fresnel cutoff. This attenuation can smooth the transition at the cutoff point at the expense of making both reflections and refractions appear more faded in the final image.
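  • One plausible formulation of Blended Fresnel Splitting is sketched below; traceRay and sampleEnvMap are hypothetical hooks, and the attenuation term is only one way to realize the attenuation by the absolute difference between the Fresnel term and the cutoff.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Color3 { float r, g, b; };

// Hypothetical hooks: one real ray cast and one cheap environment-map sample.
Color3 traceRay(const Vec3& origin, const Vec3& dir);
Color3 sampleEnvMap(const Vec3& dir);

static Color3 lerp(const Color3& a, const Color3& b, float t) {
    return {a.r + (b.r - a.r) * t, a.g + (b.g - a.g) * t, a.b + (b.b - a.b) * t};
}
static Color3 scale(const Color3& c, float s) { return {c.r * s, c.g * s, c.b * s}; }

// Blended Fresnel Splitting: cast exactly one secondary ray per pixel, chosen
// by comparing the Fresnel term against a cutoff; the other direction is
// approximated by an environment-map sample, and both contributions are
// attenuated by |fresnel - cutoff| to smooth the transition at the cutoff.
Color3 blendedFresnelSplit(const Vec3& hitPos, const Vec3& reflDir, const Vec3& refrDir,
                           float fresnel, float fresnelCutoff) {
    Color3 reflection, refraction;
    if (fresnel > fresnelCutoff) {
        reflection = traceRay(hitPos, reflDir);   // reflection dominates: ray-trace it
        refraction = sampleEnvMap(refrDir);       // approximate the refracted image
    } else {
        refraction = traceRay(hitPos, refrDir);   // refraction dominates: ray-trace it
        reflection = sampleEnvMap(reflDir);       // approximate the reflected image
    }
    // Attenuate both contributions based on distance from the cutoff; this
    // smooths the switch at the expense of slightly faded results.
    float attenuation = 1.0f - std::fabs(fresnel - fresnelCutoff);
    reflection = scale(reflection, attenuation);
    refraction = scale(refraction, attenuation);
    // Combine with the Fresnel term as the blend weight.
    return lerp(refraction, reflection, fresnel);
}
```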
  • a glass rendering recursive process can also include following the transmitted path except in the case of total internal reflection, which maintains the one-ray-per-hit cost of Blended Fresnel Splitting but without any skybox sampling or blending.
  • Interactive-rate path tracers accumulate an image over multiple frames. As a consequence, they can randomize between reflection and refraction for every pixel in every frame and the results are smoothed by the accumulation step. However, for a real-time noise- free path tracer, both reflected and refracted images must be available from the first frame without any noise from randomization.
  • the rendering process using Blended Fresnel Splitting can handle smooth transmissive objects using one ray cast and two environment map (e.g., skybox or reflection probe) samples.
  • Figure 4 is a simplified flow diagram illustrating a noise-free unified path tracing rendering pipeline 400 according to embodiments of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
  • path tracing pipeline 400 includes a pre-z (depth prepass) vertex shader 410 followed by a vertex shader 420 and fragment shader 430 corresponding to the holistic noise-free render pass.
  • This rendering pipeline 400 is configured as a noise-free unified path tracer using a depth pre-pass, secondary ray optimization, and blended Fresnel splitting techniques.
  • this pipeline leverages an important performance improvement of deferred pipelines which allows occluded fragments in the subsequent holistic lighting pass to be skipped upon failing depth testing.
  • this pipeline 400 is configured to render non-primary ray casts processes using a material property cut-off threshold (e.g., maximum roughness) to adaptively disable effects per-object and per-bounce of a ray cast to achieve a desired rendering quality. Further, pipeline 400 implements Blended Fresnel Splitting to approximate the appearance of ray splitting at transmissive surfaces without casting two rays per pixel.
  • the present invention provides a video graphics rendering system (such as system 100 of Figure 1) configured to implement the noise-free unified path tracer pipeline 400.
  • This system can be configured as a mobile device (e.g., smartphone, tablet, VR goggles, etc.) having at least a processor configured to perform the methods discussed previously using executable code in a memory storage with instructions to perform the actions of these methods.
  • Example method flows of the operation of such rendering systems are discussed with respect to Figures 5 and 6.
  • Figure 5 is a simplified flow diagram illustrating a method 500 for rendering refractive transparent objects in real-time video graphics using ray tracing according to embodiments of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims.
  • One of ordinary skill in the art would recognize many variations, alternatives, and modifications. For example, one or more steps may be added, removed, repeated, replaced, modified, rearranged, and/or overlapped, and they should not limit the scope of the claims.
  • method 500 of rendering objects can be performed by a rendering system, such as system 100 in Figure 1. More specifically, a processor of the system can be configured to perform the actions of method 500 by executable code stored in a memory storage (e.g., permanent storage) of the system.
  • method 500 can include step 502 of receiving a plurality of graphics data associated with a 3D scene including at least a first object (or including all object instantiations in the scene). This can include all data necessary to determine the first object intersected by a ray cast through each pixel in the viewport, the plurality of graphics data including at least a plurality of vertex data associated with a plurality of vertices in the 3D scene.
  • the method also includes generating a plurality of primitive data using at least the plurality of vertex data and a vertex shader.
  • the plurality of primitive data can include at least a position data and a material data.
  • the method includes determining a roughness value of at least the first object visible at each pixel using at least the plurality of vertex data and a rasterization of material properties and texture coordinates associated with at least the first object.
  • method 500 further includes interpolating render data through the rasterization of objects visible to a camera, in place of casting primary rays. This includes using rasterization to calculate both the direction of an implicit primary ray through a pixel and a first intersection position of that ray with the first object.
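  • The implicit primary ray can thus be recovered directly from the rasterized fragment, as in the small sketch below (hypothetical helper and type names):

```cpp
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
};

static Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// The implicit primary ray: rasterization has already interpolated the
// world-space position of the fragment, so the "primary hit" is simply that
// position, and the primary ray direction is recovered from the camera
// position without casting any ray.
struct ImplicitPrimaryRay {
    Vec3 direction;            // from the camera through this pixel
    Vec3 firstIntersection;    // the rasterized fragment's world position
};

ImplicitPrimaryRay implicitPrimaryRay(const Vec3& cameraPos, const Vec3& fragmentWorldPos) {
    return {normalize(fragmentWorldPos - cameraPos), fragmentWorldPos};
}
```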
  • the method further includes storing a maximum roughness value across a hits path associated with the implicit primary ray.
  • the method includes casting at least a secondary ray from the first intersection position within a fragment shader.
  • the method can further include determining whether or not to cast the secondary ray by comparing the roughness value against a predetermined threshold roughness value (i.e., secondary ray optimization).
  • the secondary ray can be cast in either a reflected or a refracted direction from the first intersection position.
  • the vertex shader and the fragment shader can be processed using a GPU, such as the GPU 132 in system 100 of Figure 1.
  • the secondary ray is cast using hardware-accelerated in-line ray tracing.
  • the method includes calculating a Fresnel value associated with the first intersection position and the implicit primary ray in the fragment shader.
  • the method includes determining whether to skip, reflect, or refract the secondary ray from the first intersection position using the fragment shader based at least on the Fresnel value and the roughness value.
  • the determination of whether to skip, reflect, or refract the secondary ray can include comparing the Fresnel value against a predetermined threshold Fresnel value (i.e., Blended Fresnel Splitting).
  • the method includes providing a visual continuity across reflected and refracted images using an environment map associated with the 3D scene.
  • the method includes sampling an environment map to provide a consistent intermediate image used to blend between reflected and refracted images.
  • the method includes attenuating the reflected and refracted ray casts following the Blended Fresnel Splitting for joint rendering and refraction for smooth transmissive objects.
  • the method includes shading at least the first pixel covering the first intersection position based at least on the secondary ray.
  • the method includes storing at least the first pixel in a frame buffer.
  • the frame buffer is configured within a memory, such as the memory 140 in system 100 of Figure 1.
  • method 500 includes transforming the 3D scene to a screen space, such as display 162 of system 100 of Figure 1.
  • Figure 6 is a simplified flow diagram illustrating a method for generating a video with ray tracing according to embodiments of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims.
  • One of ordinary skill in the art would recognize many variations, alternatives, and modifications. For example, one or more steps may be added, removed, repeated, replaced, modified, rearranged, and/or overlapped, and they should not limit the scope of the claims.
  • the method of generating a video can be performed by a rendering system, such as system 100 in Figure 1. More specifically, a processor of the system can be configured to perform the actions of method 600 by executable code stored in a memory storage (e.g., permanent storage) of the system. As shown, method 600 can include step 602 of generating a 3D scene including at least a first object (or including all object instantiations in the scene). In an example, the 3D scene is generated by a CPU, such as the CPU 130 of system 100 in Figure 1. In step 604, the method includes receiving a plurality of graphics data associated with the 3D scene, the plurality of graphics data including a plurality of vertex data associated with a plurality of vertices in the 3D scene.
  • the method includes determining a roughness value of at least the first object using at least the plurality of vertex data and a rasterization of material properties and texture coordinates associated with at least the first object. This data is used to compute a primary ray direction and hit position without casting a primary ray.
  • the method includes casting a secondary ray within a fragment shader from the first intersection position between the first object and the implicit primary ray determined from the rasterization.
  • the method includes calculating a Fresnel value associated with the first intersection position and the implicit primary ray in the fragment shader.
  • the method includes determining whether to skip, reflect, or refract the secondary ray from the first intersection position using the fragment shader based at least on the Fresnel value and the roughness value.
  • the method includes shading at least the first pixel covering the first intersection position based at least on the secondary ray.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

A system and method for rendering video graphics. The method includes generating and receiving graphics data from a 3D scene including a first object. Primitive data can be generated by a vertex shader using vertex data of the scene. A roughness value of the first object can be determined using the vertex data and an associated rasterization. A primary ray can be inferred from interpolation of vertex data performed by rasterization, and a secondary ray can be cast from the first intersection position within a fragment shader. Using the roughness value and a calculated Fresnel value associated with the first intersection position and the implicit primary ray, the secondary ray may be reflected or refracted. Then, the first pixel can be shaded based at least on the secondary ray. Other embodiments include corresponding systems and computer programs configured to perform the actions of the methods.

Description

METHODS AND SYSTEMS FOR RENDERING VIDEO GRAPHICS
BACKGROUND OF THE INVENTION
[0001] As the standards of video graphics rise each year, the resource costs of rendering such video graphics continue to rise as well. These costs are particularly important to optimize in real-time applications (RTAs), such as video games, video conferencing, and virtual reality (VR) applications. Additionally, because use of such RTAs in mobile devices has become widespread, it is increasingly desirable to improve the quality of video graphics in mobile applications. However, compared to desktop computers, mobile devices have limited memory capacity and bandwidth, which presents challenges to achieving adequate rendering performance. There are various solutions to address the memory-intensive nature of video graphics rendering, but they have been inadequate, as described below.
[0002] Therefore, new and improved systems and methods for rendering video graphics are desired.
BRIEF SUMMARY OF THE INVENTION
[0003] The present invention is directed to graphics rendering systems and methods. According to a specific embodiment, the present invention provides a method that utilizes noise-free rendering that optimizes secondary ray tracing. There are other embodiments as well.
[0004] Embodiments of the present invention can be implemented in conjunction with existing systems and processes. For example, a rendering system configuration and its related methods according to the present invention can be used in a wide variety of systems, including virtual reality (VR) systems, mobile devices, and the like. Additionally, various techniques according to the present invention can be adopted into existing systems via integrated circuit fabrication, operating software, and application programming interfaces (APIs). There are other benefits as well.
[0005] A system of one or more computers can be configured to perform particular operations or actions by virtue of having software, firmware, hardware, or a combination of them installed on the system that in operation causes or cause the system to perform the actions. One or more computer programs can be configured to perform particular operations or actions by virtue of including instructions that, when executed by data processing apparatus, cause the apparatus to perform the actions. One general aspect includes a method for rendering refractive transparent objects in real-time video graphics using ray tracing. The method also includes receiving a plurality of graphics data associated with a three-dimensional (3D) scene, including all data necessary to determine the first object intersected by a ray cast through each pixel in the viewport, the plurality of graphics data including a plurality of vertex data associated with a plurality of vertices in the 3D scene. The method also includes generating a plurality of primitive data using at least the plurality of vertex data and a vertex shader, the plurality of primitive data may include a position data and a material data. The method also includes determining a roughness value of the first object using at least the plurality of vertex data and a rasterization of material properties and texture coordinates associated with at least the first object. The method also includes using rasterization to calculate both the direction of an implicit primary ray through a pixel and a first intersection position of that ray with the first object. The method also includes casting a secondary ray from the first intersection position within a fragment shader. The method also includes providing a visual continuity across reflected and refracted images using an environment map associated with the 3D scene. The method also includes accessing an environment map associated with the 3D scene. The method also includes calculating a Fresnel value associated with the first intersection position and the implicit primary ray in the fragment shader. The method also includes determining whether to skip, reflect, or refract the secondary ray from the first intersection position using the fragment shader based at least on the Fresnel value and the roughness value. The method also includes shading the first pixel covering the first intersection position based at least on the secondary ray. The method also includes storing the first pixel in a frame buffer. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
[0006] Implementations may include one or more of the following features. The secondary ray may be cast using hardware-accelerated in-line ray tracing. The method may include interpolating render data through the rasterization of objects visible to a camera, the objects including the first object. The method may include determining whether or not to cast the secondary ray by comparing the roughness value against a predetermined threshold roughness value. The method may include storing a maximum roughness value across a hits path associated with the implicit primary ray. The method may include determining whether to skip, reflect, or refract the secondary ray by comparing the Fresnel value against a predetermined threshold Fresnel value. The secondary ray may be cast in a refracted or reflected direction from the first intersection position. The vertex shader and the fragment shader are processed using a graphics processing unit. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium.
[0007] One general aspect includes a system for providing rendering video graphics. The system may include permanent storage, which holds an application that includes executable instructions. The system also includes volatile memory, which holds data used by the application when it is run. The system also includes a processor coupled to the storage and the memory, the processor being configured to: execute application instructions, generate a plurality of graphics data associated with a three-dimensional (3D) scene including at least a first object, the plurality of graphics data including a plurality of vertex data associated with a plurality of vertices in the 3D scene; generate a plurality of primitive data using at least the plurality of vertex data and a vertex shader, the plurality of primitive data may include a position data and a material data; determine a roughness value of the first object visible at each pixel using at least the plurality of vertex data and a rasterization of material properties and texture coordinates associated with at least the first object; cast a secondary ray within a fragment shader from a first intersection position of the first object and an implicit primary ray determined from the rasterization; calculate a Fresnel value associated with the first intersection position and the implicit primary ray in the fragment shader; determine whether to skip, reflect, or refract a single secondary ray from the first intersection position using the fragment shader based at least on the Fresnel value and the roughness value; shade the first pixel covering the first intersection position based at least on the secondary ray if one is cast; and storing the first pixel in the memory. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
[0008] Implementations may include one or more of the following features. The system where the processor may include a central processing unit (CPU) and a graphics processing unit (GPU). The memory is shared by the CPU and the GPU. The memory may include a frame buffer for storing the first pixel. The system may include a display configured to display the first pixel at a refresh rate of at least 24 frames per second. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium.
[0009] One general aspect includes a method for generating a video with ray tracing. The method includes generating a three-dimensional (3D) scene made up of objects. The method also includes receiving a plurality of graphics data associated with each object in the 3D scene, the plurality of graphics data including a plurality of vertex data associated with a plurality of vertices in each object. The method also includes generating a plurality of primitive data using at least the plurality of vertex data and a vertex shader, the plurality of primitive data may include a position data and a material data. The method also includes determining a roughness value of the first object using at least the plurality of vertex data and a rasterization of material properties and texture coordinates associated with at least the first object in order to approximate a primary ray cast. The method also includes casting a secondary ray from the first point within a fragment shader. The method also includes calculating a Fresnel value associated with the first point and the primary ray in the fragment shader. The method also includes determining whether to reflect or refract the secondary ray from the first point using the fragment shader based at least on the Fresnel value and the roughness value. The method also includes shading the first pixel covering the first point based at least on the secondary ray. Other embodiments of this aspect include corresponding computer systems, apparatus, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods.
[0010] Implementations may include one or more of the following features. The method may include accessing an environment map associated with the 3D scene. The method may include providing a visual continuity across reflected and refracted images using an environment map associated with the 3D scene. The 3D scene is generated using a central processing unit. The vertex shader and the fragment shader are processed using a graphics processing unit. The central processing unit and the graphics processing unit share a memory. The method may include transforming the 3D scene to a screen space. Implementations of the described techniques may include hardware, a method or process, or computer software on a computer-accessible medium.
[0011] It is to be appreciated that embodiments of the present invention provide many advantages over conventional techniques. Among other things, the present invention provides configurations and methods for graphics rendering systems that minimize memory bandwidth load using a noise-free unified path tracer. Additionally, the present invention implements secondary ray optimizations and Blended Fresnel Splitting to further reduce memory costs while achieving real-time rendering for mobile applications.
[0012] The present invention achieves these benefits and others in the context of known technology. However, a further understanding of the nature and advantages of the present invention may be realized by reference to the latter portions of the specification and attached drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] Figure 1 is a simplified diagram illustrating a system configured for rendering video graphics according to embodiments of the present invention.
[0014] Figure 2 is a simplified flow diagram illustrating a conventional forward pipeline for rendering video graphics.
[0015] Figure 3 is a simplified flow diagram illustrating a conventional hybrid pipeline for rendering video graphics.
[0016] Figure 4 is a simplified flow diagram illustrating a rendering pipeline in a rendering system according to embodiments of the present invention.
[0017] Figure 5 is a simplified flow diagram illustrating a method for rendering video graphics according to embodiments of the present invention.
[0018] Figure 6 is a simplified flow diagram illustrating a method for generating a video according to embodiments of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
[0019] The present invention is directed to graphics rendering systems and methods. According to a specific embodiment, the present invention provides a method and system that utilizes noise-free rendering that optimizes secondary ray tracing. The present invention can be configured for real-time applications (RTAs), such as video conferencing, video gaming, and virtual reality (VR). There are other embodiments as well.
[0020] Conventional techniques often involve hybrid rendering approaches with greater emphasis on desktop computing applications. These conventional hybrid approaches use a combination of traditional rasterization for primary graphics rendering and ray tracing for rendering advanced details, such as lighting. These techniques require multiple passes, each of which uses a separate screen-space render target, which is not suitable for mobile applications that have limited memory and power capacities.
[0021] It is to be appreciated that embodiments of the present invention provide many advantages over conventional techniques. Among other things, the present invention provides configurations and methods for graphics rendering systems that minimize memory bandwidth load using a noise-free unified path tracer. Additionally, the present invention implements secondary ray optimizations and Blended Fresnel Splitting to further reduce memory costs while achieving real-time rendering for mobile applications.
[0022] The following description is presented to enable one of ordinary skill in the art to make and use the invention and to incorporate it in the context of particular applications. Various modifications, as well as a variety of uses in different applications will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to a wide range of embodiments. Thus, the present invention is not intended to be limited to the embodiments presented, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
[0023] In the following detailed description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the present invention may be practiced without necessarily being limited to these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention.
[0024] The reader’s attention is directed to all papers and documents which are filed concurrently with this specification and which are open to public inspection with this specification, and the contents of all such papers and documents are incorporated herein by reference. All the features disclosed in this specification, (including any accompanying claims, abstract, and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features.
[0025] Furthermore, any element in a claim that does not explicitly state “means for” performing a specified function, or “step for” performing a specific function, is not to be interpreted as a “means” or “step” clause as specified in 35 U.S.C. Section 112, Paragraph 6. In particular, the use of “step of” or “act of” in the Claims herein is not intended to invoke the provisions of 35 U.S.C. 112, Paragraph 6.
[0026] Please note, if used, the labels left, right, front, back, top, bottom, forward, reverse, clockwise and counter clockwise have been used for convenience purposes only and are not intended to imply any particular fixed direction. Instead, they are used to reflect relative locations and/or directions between various portions of an object.
[0027] Figure 1 is a simplified block diagram illustrating a mobile system 100 for rendering video graphics according to embodiments of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
[0028] As shown, the mobile system 100 can be configured within housing 110 and can include camera device 120 (or other image or video capturing device), processor device 130, memory device 140 (e.g., volatile memory storage), and storage device 150 (e.g., permanent memory storage). Camera 120 can be mounted on housing 110 and be configured to capture an input image. The input image can be stored in memory 140, which may include a random-access memory (RAM) device, an image/video buffer device, a frame buffer, or the like. Various software, executable instructions, and files can be stored in storage device 150, which may include read-only memory (ROM), a hard drive, or the like. Processor 130 may be coupled to each of the previously mentioned components and be configured to communicate between these components.
[0029] In a specific example, processor 130 includes a central processing unit (CPU), a network processing unit (NPU), or the like. System 100 may also include graphics processing unit (GPU) 132 coupled to at least processor 130 and memory 140. In an example, memory 140 is configured to be shared between processor 130 (e.g., CPU) and GPU 132, and is configured to hold data used by an application when it is run. As memory 140 is shared, it is important to use memory 140 efficiently. For example, a high memory usage by GPU 132 may negatively impact system performance. As an example, shared memory or RAM can be used in a variety of ways, depending on the specific needs of the CPU and GPU. The amount of shared RAM that is available in a device can have a significant impact on its performance, as it determines how much data can be stored and accessed quickly by the CPU and GPU. In a specific example, memory 140 is configured as a tile-based memory.
[0030] System 100 may also include user interface 160 and network interface 170. User interface 160 may include display region 162 that is configured to display text, images, videos, rendered graphics, interactive elements, etc. Display 162 may be coupled to the GPU 132 and may also be configured to display at a refresh rate of at least 24 frames per second. Display region 162 may comprise a touchscreen display (e.g., in a mobile device, tablet, etc.). Alternatively, user interface 160 may also include touch interface 164 for receiving user input (e.g., keyboard or keypad in a mobile device, laptop, or other computing devices). User interface 160 may be used in real-time applications (RTAs), such as multimedia streaming, video conferencing, navigation, video games, and the like.
[0031] Network interface 170 may be configured to transmit and receive instructions and files (e.g., using Wi-Fi, Bluetooth, Ethernet, etc.) for graphic rendering. In a specific example, network interface 170 may be configured to compress or down-sample images for transmission or further processing. Network interface 170 may be configured to send one or more images to a server for OCR. Processor 130 may be coupled to and configured to communicate between user interface 160, network interface 170, and/or other interfaces.
[0032] In an example, processor 130 and GPU 132 may be configured to perform steps for rendering video graphics, which can include those related to the executable instructions stored in storage 150. Processor 130 may be configured to execute application instructions and generate a plurality of graphics data associated with a 3D scene including at least a first object. The plurality of graphics data can include a plurality of vertex data associated with a plurality of vertices in the 3D scene (e.g., for each object). GPU 132 may be configured to generate a plurality of primitive data using at least the plurality of vertex data and a vertex shader. The plurality of primitive data may include a position data and a material data. Also, GPU 132 may be configured to determine a roughness value of the first object using at least the plurality of vertex data and a rasterization of material properties and texture coordinates associated with at least the first object.
[0033] In an example, GPU 132 may be configured to cast a primary ray in a first direction through a first pixel onto a first point of the first object. Alternatively, GPU 132 may be configured to skip the primary ray cast by computing the primary hit using data interpolated by rasterization. The GPU 132 can also be configured to cast a secondary ray from the first point or the primary hit point within a fragment shader. Then, GPU 132 can be configured to calculate a Fresnel value associated with the first point and the primary ray in the fragment shader. Using at least the Fresnel value and the roughness value, the GPU 132 may be configured to determine whether to reflect or refract the secondary ray from the first point or the primary hit point using the fragment shader. The GPU 132 may be configured to shade the first pixel covering the first point or the primary hit point based at least on the secondary ray, and the first pixel can then be stored in memory 140.

[0034] Other embodiments of this system include corresponding computer systems, apparatuses, and computer programs recorded on one or more computer storage devices, each configured to perform the actions of the methods. Further details of methods are discussed with reference to the following figures.
[0035] Figure 2 is a simplified flow diagram illustrating conventional forward pipeline 200 for rendering video graphics. As shown, forward pipeline 200 includes vertex shader 210 followed by fragment shader 220. In a forward pipeline rendering process, a CPU provides graphics data of a 3D scene (e.g., from memory, storage, over network, etc.) to a graphic card or a GPU. In the GPU, vertex shader 210 transforms objects in the 3D scene from object space to screen space. This process includes projecting the geometries of the objects and breaking them down into vertices, which are then transformed and split into fragments, or pixels. In fragment shader 220, these pixels are shaded (e.g., colors, lighting, textures, etc.) before they are passed onto a display (e.g., screen of a smartphone, tablet, VR goggles, etc.). In lighting cases, rendering effects are processed for every vertex and on every fragment in the visible scene for every light source.
[0036] Figure 3 is a simplified flow diagram illustrating a conventional deferred pipeline 300 for rendering video graphics. Here, the prepass process 310 involves receiving the graphics data of the 3D scene from the CPU and generating a geometry buffer (G-buffer) with data needed for subsequent rendering passes, such as color, depth, normal, etc. A ray traced reflections pass 320 involves processing the G-buffer data to determine the reflections of the scene, and the ray traced shadows pass 330 involves processing the G-buffer data to determine the shadows of the scene. Then, a denoising pass 340 removes the noise from pixels that were ray-traced and blends across pixels that were not ray-traced. In the main shading pass 350, the reflections, shadows, and material evaluations are combined to produce the shaded output with each pixel color. In the post pass 360, the shaded output is subject to additional rendering processes such as color grading, depth of field, etc.
[0037] Compared to the forward pipeline 200, the deferred pipeline 300 reduces the total fragment count by only processing rendering effects based on unoccluded pixels. This is accomplished by breaking up the rendering process into multiple stages (i.e., passes) in which the color, depth, and normal of the objects in the 3D scene are written to separate buffers that are subsequently rendered together to produce the final rendered frame. Subsequent passes use depth values to skip rendering of occluded pixels when executing more complex lighting shaders. The deferred render pipeline approach reduces the complexity of any single shader compared to the forward render pipeline approach, but having multiple rendering passes requires greater memory bandwidth, which is especially problematic for many architectures today with limited and shared memory.
[0038] Embodiments of the present invention provide methods and systems for graphics rendering implementing one or more optimization techniques to achieve real-time performance while minimizing memory bandwidth load. In a specific embodiment, the present invention provides for a method and system using noise-free unified path tracing that combines ray-traced effects in-line with real-time approximations of physically-based rendering. These techniques can be performed by a GPU (such as GPU 132 of system 100 in Figure 1) configured within a graphic rendering system. The rendering system can be configured as a mobile device, such as a smartphone, a tablet, VR goggles, or the like. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. For example, the techniques described herein may be implemented separately or in combination with each other and/or other conventional techniques depending upon the application.
[0039] According to an example, the present invention implements secondary ray optimization techniques. While primary rays render a target scene as directly perceived by the camera, secondary rays can be configured to render indirect light effects (e.g., reflections, refractions, shadows, etc.). Depending on the application, such indirect light effects may be rendered in a manner that mitigates the costs of ray casts, such as secondary ray casts, tertiary ray casts, and beyond.
[0040] In an example, ray cost can be mitigated by using a designated cutoff threshold that restricts the number of ray casts based on predetermined criteria for material properties, such as a roughness cutoff based on a roughness material property. This technique can be implemented to skip ray casts, but it does not support configuring a ray cast to render more cheaply when a given material value limits the quality that is actually needed. However, the present invention can include tracking a minimum and/or maximum material property value along an entire path of a ray cast and using such values to determine a desired rendering quality. For example, a maximum roughness along the entire path may be used to determine a minimum necessary rendering quality. Using such path cutoff thresholds can enable the reduction of quality to the minimum necessary for the entire path. Separate cutoff thresholds can be used for different material properties, or certain properties may share the same cutoff threshold depending on the application. Other material properties can include metalness, opacity, gloss, index of refraction (IOR), and the like.

[0041] In recursive ray tracing, previous material properties (e.g., roughness) are stored to properly sample the correct environment map mip in the case of ray misses. The stored material property value can be replaced with a designated cutoff threshold for a material property value (e.g., maximum roughness along a path) to skip certain rendering features, as discussed previously. In such cases, the stored maximum or minimum material property values can still be used as needed for environment map mip sampling. The present invention can also use a stochastic path tracing pipeline which implements subsurface scattering in opaque objects and volumetric backscattering in transmissive objects, both of which can also be disabled based on designated cutoff thresholds (e.g., a maximum roughness along the path indicating that processing secondary ray casts from a rough object to a smooth object is not needed for the desired quality of the final image).
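For illustration only, the following C++ sketch shows one way a path-wide maximum roughness could be tracked and compared against cutoff thresholds to decide whether a secondary ray is worth casting. The structure, function names, and threshold values are assumptions made for exposition and are not part of the described embodiments.

```cpp
#include <algorithm>
#include <cstdio>

// Hypothetical per-ray payload; the field and threshold names are illustrative.
struct PathState {
    int   bounce       = 0;     // current bounce index (0 = primary hit)
    float maxRoughness = 0.0f;  // maximum roughness seen along the path so far
};

constexpr float kReflectionRoughnessCutoff = 0.6f;  // assumed cutoff for reflection rays
constexpr int   kMaxBounces                = 2;     // assumed bounce limit

// Decide whether a secondary reflection ray is worth casting from the current hit.
bool shouldCastReflectionRay(const PathState& state, float hitRoughness) {
    if (state.bounce >= kMaxBounces) return false;
    float pathRoughness = std::max(state.maxRoughness, hitRoughness);
    // If the roughest surface on the path already blurs the result, a cheap
    // environment-map lookup is sufficient and the ray cast can be skipped.
    return pathRoughness < kReflectionRoughnessCutoff;
}

// Advance the path state before recursing into the next ray cast.
PathState advancePath(const PathState& state, float hitRoughness) {
    PathState next = state;
    next.bounce += 1;
    next.maxRoughness = std::max(state.maxRoughness, hitRoughness);
    return next;
}

int main() {
    PathState primary;                                  // primary hit on a smooth surface
    printf("cast from smooth hit: %d\n", shouldCastReflectionRay(primary, 0.1f));
    PathState afterRough = advancePath(primary, 0.8f);  // path passed through a rough surface
    printf("cast after rough hit: %d\n", shouldCastReflectionRay(afterRough, 0.1f));
}
```

Because the maximum roughness is carried forward with the path state, a smooth object seen through a rough reflector can be rendered at the lower quality dictated by the entire path rather than by the smooth hit alone.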
[0042] In an example, the color computation of the secondary ray cast can be simplified in the case of global illumination of a rough surface by using a simple opacity-attenuated Lambertian diffuse calculation, or the like. By passing the roughness value of a previous hit into a new ray cast, the new ray process can make context-aware decisions to skip advanced effects, such as subsurface scattering, reflections, refractive transmission, etc. Since the previous roughness value is already obtained for sampling the correct level of detail of a reflection map in the case of a ray miss, there is no additional cost in ray payload footprint to overload that roughness information to adaptively skip certain rendering effects.
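A minimal C++ sketch of this simplified secondary-hit color is shown below, assuming illustrative names and values: the cheap term is an opacity-attenuated Lambertian diffuse, and the roughness of the previous hit doubles as the signal for skipping advanced effects.

```cpp
#include <algorithm>
#include <cstdio>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  scale(const Vec3& v, float s)     { return {v.x * s, v.y * s, v.z * s}; }

// Opacity-attenuated Lambertian diffuse: a cheap color for a secondary hit when
// the previous hit's roughness indicates advanced effects can be skipped.
static Vec3 cheapSecondaryColor(const Vec3& albedo, const Vec3& n, const Vec3& l, float opacity) {
    return scale(albedo, std::max(dot(n, l), 0.0f) * opacity);
}

// The previous hit's roughness is already in the payload for environment-map mip
// selection, so reusing it as a skip signal adds no payload cost.
static bool skipAdvancedEffects(float previousRoughness, float cutoff) {
    return previousRoughness > cutoff;
}

int main() {
    Vec3 albedo{0.7f, 0.2f, 0.2f}, n{0.0f, 1.0f, 0.0f}, l{0.0f, 1.0f, 0.0f};
    if (skipAdvancedEffects(/*previousRoughness=*/0.8f, /*cutoff=*/0.5f)) {
        Vec3 c = cheapSecondaryColor(albedo, n, l, /*opacity=*/1.0f);
        printf("cheap secondary color: %.2f %.2f %.2f\n", c.x, c.y, c.z);
    }
}
```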
[0043] Additional ray optimizations to reduce bandwidth costs can be achieved by the use of a unified rendering architecture (e.g., using a monolithic shader). The monolithic shader code implements recursive rendering functions in an in-line configuration rather than in discrete ray-tracing shaders. Using the cutoff thresholds discussed previously, the rendering process for certain effects can be handled by different rendering processes (e.g., rasterization, ray tracing, path tracing, etc.). Further details are discussed below.
[0044] According to an example, the present invention implements noise-free rendering and unified path tracing in a single shader. These features can enable more flexible and context-aware tuning of individual ray casts. The unified path tracing technique involves computing all rendering effects (e.g., lighting effects) in the same holistic shader. For a given 3D scene, a path tracing process can be performed in a single recursive function call that is repeated per-pixel to produce a rendered frame. Also, in-line ray tracing can be configured with such single recursive function calls to cast rays from inside a vertex, fragment, or compute shader.

[0045] In an example, a basic rasterization pipeline can be used to compute primary rendering data for each pixel of a rendered frame (e.g., primary hit data per-pixel). The vertex shader can extract rendering properties from the 3D scene for rasterization, such as world-space position, world-space normal, world-space tangent, UV mapping, material ID, etc. The fragment shader can compute direct illumination of the primary hits using the material ID and dispatching a process to compute shadowing (e.g., rayQuery call). The fragment shader can also dispatch recursive compute processes (e.g., rayQuery calls) for indirect illumination of predetermined materials (e.g., glass, mirror, diffuse materials, etc.).
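The C++ sketch below illustrates, with toy stand-ins for the actual scene queries, the shape of a single recursive shading function evaluated once per pixel. In the described pipeline the equivalent logic would live inside the fragment shader and issue in-line ray queries; the helper functions, constants, and return values here are assumptions for exposition only.

```cpp
#include <algorithm>
#include <cstdio>

// Toy stand-ins for scene queries; in the described pipeline these would be
// in-line ray queries issued from inside the fragment shader.
struct Hit { bool valid; float roughness; float emission; };

static Hit traceToy(int bounce) {
    // Pretend every ray hits a slightly rougher, dimmer surface than the last.
    return {bounce < 3, 0.2f * (bounce + 1), 0.5f / (bounce + 1)};
}

static float environmentToy(float roughness) { return 0.3f * (1.0f - roughness); }

constexpr int   kMaxBounces      = 2;
constexpr float kRoughnessCutoff = 0.5f;

// Single recursive shading function: all effects for one pixel are computed in
// one call tree, so intermediate results never round-trip through render targets.
static float shade(int bounce, float maxRoughness) {
    Hit h = traceToy(bounce);
    if (!h.valid) return environmentToy(maxRoughness);           // ray miss
    float pathRoughness = std::max(maxRoughness, h.roughness);
    float color = h.emission;                                     // direct term
    if (bounce < kMaxBounces && pathRoughness < kRoughnessCutoff)
        color += 0.5f * shade(bounce + 1, pathRoughness);         // indirect term via recursion
    else
        color += 0.5f * environmentToy(pathRoughness);            // cheap fallback
    return color;
}

int main() { printf("pixel radiance: %.3f\n", shade(0, 0.0f)); }
```

The key design point is that the recursion happens inside one shader invocation rather than across multiple render passes, which is what avoids writing and re-reading intermediate screen-space buffers.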
[0046] The single-shader architecture provides opportunities for data reuse across different lighting effects without incurring the bandwidth cost of transferring data between render passes (which requires accessing main memory), as is common in hybrid rendering architectures for desktop applications. By reducing the number of render passes, this architecture is particularly suitable for tile-based memory architectures commonly used in mobile devices. Also, since all effects are rendered in the same shader, it is much easier to optimize the shader code by keeping shared data for only as long as needed to complete the computation of the effects using that data.
[0047] While effect-per-pass rendering pipelines can only enable effects globally (or globally for ray tracing as described previously), the embodiments of the present invention can effectively disable effects per-object, per-bounce based on previous roughness or other material property values. In shadow rendering cases, while the cost is still one ray cast per pixel, the shading of that ray can be completed more quickly. For example, at the second bounce of a smooth reflector or refractor, it may still be necessary to capture the material model of reflected or refracted objects, but the same level of quality as desired for shading a primary hit may not be required. The rendering process can revert to rasterized shadows from the secondary ray onwards, even if ray-traced shadows are enabled. This reduces the cost of shadows from one ray cast per pixel to one texture sample per pixel.
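A minimal sketch of this shadow fallback, with assumed helper names and toy return values, might look as follows.

```cpp
#include <cstdio>

// Toy stand-ins: the real versions would issue a visibility ray or sample a
// shadow map produced by a rasterized shadow pass.
static float rayTracedVisibility() { return 1.0f; }  // cost: one ray cast
static float shadowMapVisibility() { return 0.8f; }  // cost: one texture sample

// Shadowing for a hit: primary hits use ray-traced shadows when enabled, while
// hits reached by secondary rays onward fall back to rasterized shadow maps.
static float shadowTerm(int bounce, bool rayTracedShadowsEnabled) {
    if (bounce == 0 && rayTracedShadowsEnabled) return rayTracedVisibility();
    return shadowMapVisibility();
}

int main() {
    printf("primary hit shadow:   %.2f\n", shadowTerm(0, true));
    printf("secondary hit shadow: %.2f\n", shadowTerm(1, true));
}
```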
[0048] The noise-free rendering technique involves configuring the rendering pipeline to use deterministic ray casts. In a specific example, the noise-free rendering pipeline is configured to support only delta lights (a.k.a. punctual lights) that are sampled deterministically. Delta lights can include directional lights, point lights, spotlights, and the like. This noise-free pipeline also mitigates costs by reducing the quality of secondary rays. For example, recursive function calls for shadows (e.g., rayQuery) can compute rendering with only rasterized shadows and cast only transmission rays for glass, except in the case of total internal reflection. In an example, the shadowing can be calculated from rasterized shadows stored in shadow maps.
[0049] In an example, direct illumination of objects over a range of roughness values with a roughness cut-off can be computed using a distribution function, such as a physically based microfacet bidirectional reflectance distribution function (BRDF) that uses a GGX normal distribution function, the Smith geometry shadowing term, and the Schlick approximation for the Fresnel term. On the other hand, computing indirect illumination in real-time with such distribution functions faces challenges because the direction of the indirect light ray casts changes with the position of the point being rendered.
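For reference, a scalar C++ sketch of such a microfacet specular term is shown below. The particular parameterization (e.g., alpha equal to roughness squared and k equal to alpha/2) follows one common real-time convention and is an assumption rather than the exact formulation used in any embodiment.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

constexpr float kPi = 3.14159265358979f;

// GGX normal distribution term.
static float ggxD(float nDotH, float alpha) {
    float a2 = alpha * alpha;
    float d  = nDotH * nDotH * (a2 - 1.0f) + 1.0f;
    return a2 / (kPi * d * d);
}

// Smith-style masking/shadowing, split into one factor per direction.
static float smithG1(float nDotX, float k) { return nDotX / (nDotX * (1.0f - k) + k); }

// Schlick approximation of the Fresnel term.
static float schlickF(float vDotH, float f0) {
    return f0 + (1.0f - f0) * std::pow(1.0f - vDotH, 5.0f);
}

// Specular reflectance for one light direction; all cosines are passed in pre-computed.
static float specularBRDF(float nDotL, float nDotV, float nDotH, float vDotH,
                          float roughness, float f0) {
    float alpha = roughness * roughness;   // assumed roughness remapping
    float k     = alpha * 0.5f;            // assumed geometry-term constant
    float D = ggxD(nDotH, alpha);
    float G = smithG1(nDotL, k) * smithG1(nDotV, k);
    float F = schlickF(vDotH, f0);
    return (D * G * F) / std::max(4.0f * nDotL * nDotV, 1e-4f);
}

int main() {
    printf("specular = %.4f\n",
           specularBRDF(0.7f, 0.8f, 0.95f, 0.9f, /*roughness=*/0.3f, /*f0=*/0.04f));
}
```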
[0050] Indirect illumination of materials above a roughness threshold can be achieved with image-based lighting (i.e., environment maps) using a split-sum approximation. In an example, the split-sum approximation uses pre-filtered environment maps and a 2D BRDF look-up table (LUT) to compute plausible reflections of the environment map in objects of varying roughness values using only two texture lookups at runtime. When such environment maps are used, a separate reflection probe containing the scene geometry can also be re-rendered to account for any geometry that might be occluding the environment map or any lights in the scene.
[0051] Although previous examples are discussed in the context of rendering illumination with respect to material roughness, these techniques can also be applied to other material properties, such as transparency. In an example, the split-sum approximation can be applied to transparency rendering. After reflecting the incident direction across the view plane, that direction can be used to sample the environment map. The same LUT values can be used to calculate the reflection of the environment map on the back face. Further, the calculated reflection can be multiplied by albedo (i.e., characteristic color of an object) and blended with a diffuse term based on opacity to be used as a rough transmission contribution. Of course, there can be other variations, modifications, and alternatives.
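The following C++ sketch illustrates the runtime side of this idea with toy lookups standing in for the pre-filtered environment map and the BRDF LUT. The lookup values, the albedo tint, and the opacity-based blend used below are illustrative assumptions, not the exact blend of any embodiment.

```cpp
#include <cstdio>

// Toy split-sum lookups: prefilteredEnv() stands in for a mip-mapped environment
// map sample and brdfLUT() for the 2D (NdotV, roughness) lookup table.
struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };

static Vec3 scale(const Vec3& v, float s)        { return {v.x * s, v.y * s, v.z * s}; }
static Vec3 add(const Vec3& a, const Vec3& b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(const Vec3& a, const Vec3& b)    { return {a.x * b.x, a.y * b.y, a.z * b.z}; }
static Vec3 lerp(const Vec3& a, const Vec3& b, float t) { return add(scale(a, 1.0f - t), scale(b, t)); }

static Vec3 prefilteredEnv(float roughness) {     // toy: dimmer when blurrier
    return {0.8f * (1.0f - 0.5f * roughness), 0.7f, 0.9f};
}
static Vec2 brdfLUT(float nDotV, float roughness) {  // toy scale/bias pair
    return {0.9f - 0.3f * roughness, 0.05f * nDotV};
}

// Image-based specular: two lookups plus a scale/bias applied to F0.
static Vec3 splitSumSpecular(float nDotV, float roughness, const Vec3& f0) {
    Vec3 env = prefilteredEnv(roughness);
    Vec2 ab  = brdfLUT(nDotV, roughness);
    return mul(env, add(scale(f0, ab.x), Vec3{ab.y, ab.y, ab.y}));
}

// Rough transmission: the same machinery sampled in the transmitted direction,
// tinted by albedo and blended with a diffuse term based on opacity.
static Vec3 roughTransmission(float nDotV, float roughness, const Vec3& f0,
                              const Vec3& albedo, const Vec3& diffuse, float opacity) {
    Vec3 transmitted = mul(splitSumSpecular(nDotV, roughness, f0), albedo);
    return lerp(transmitted, diffuse, opacity);
}

int main() {
    Vec3 c = roughTransmission(0.8f, 0.4f, {0.04f, 0.04f, 0.04f},
                               {0.9f, 0.6f, 0.5f}, {0.2f, 0.15f, 0.1f}, 0.3f);
    printf("rough transmission: %.3f %.3f %.3f\n", c.x, c.y, c.z);
}
```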
[0052] For real-time path tracing, the noise-free design achieves suitable rendering results within a single frame. Using rays cast in deterministic, non-randomized directions, the rendering process can be configured for smooth reflection and transmission ray-traced effects. The rendering process can include rasterization to approximate low-frequency effects, such as rough reflection, rough transmission, soft shadows, and the like. Further, because the noise-free design does not require denoising or accumulation processes, the rendering process reduces the need to read intermediate rendering results from memory, which reduces the memory bandwidth load and is particularly suitable to mobile devices.
[0053] According to an example, the present invention implements Blended Fresnel Splitting to handle joint rendering of reflection and refraction rather than managing separate passes for each of these effects. In the case of rendering smooth transmissive objects, rays can be cast both in the reflected and refracted directions and processed recursively. However, this introduces an exponential per-pixel performance cost. Even limiting the maximum bounce count per pixel is not sufficient to manage the cost for mobile applications. To mitigate this cost, the rendering process can randomly select between the reflected and refracted direction based on the Fresnel value at the pixel.
[0054] In an example, this technique uses cutoff thresholds to determine whether to cast a ray or use a non-ray-tracing approximation of the ray cast result (e.g., color). In the approximation cases, sample data may be measured to generate an approximation of the desired effect. For example, a reflection probe can be used to give a low-resolution cube map of an object’s immediate surroundings, which can be sampled to provide an approximation of either transmitted or reflected color. As discussed previously, cutoffs for transmission and reflection can be controlled separately or both effects can share the same cutoff.
[0055] In the case of rendering glass, the rendering method can define an additional cutoff value against which the Fresnel term at each pixel is compared to avoid casting two rays per pixel. For Fresnel values above the cutoff, the method can include casting a reflection ray and sampling a cube map in the refracted direction. Below the cutoff, the method can include casting a refraction ray and sampling the cube map (i.e., environment map) in the reflected direction.
[0056] To mitigate a harsh visual cutoff corresponding to this Fresnel cutoff value, the method can attenuate both the reflected and refracted ray casts based on the absolute difference between the pixel’s Fresnel term and the Fresnel cutoff. This attenuation can smooth the transition at the cutoff point at the expense of making both reflections and refractions appear more faded in the final image. In a specific example, a glass rendering recursive process can also include following the transmitted path except in the case of total internal reflection, which maintains the one-ray -per-hit cost of Blended Fresnel Splitting but without any skybox sampling or blending.
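A sketch of this per-pixel decision and attenuation is shown below. The cutoff value, blend width, attenuation ramp, and stand-in sampling functions are all assumptions chosen for exposition, since the exact attenuation curve is not specified above; the structural point is one ray cast plus one environment-map sample per pixel.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Toy stand-ins for the expensive and cheap paths; names and values are illustrative.
static float castReflectionRay()     { return 0.9f; }  // recursive ray-traced reflection
static float castRefractionRay()     { return 0.6f; }  // recursive ray-traced refraction
static float sampleEnvMapReflected() { return 0.8f; }  // environment-map approximation
static float sampleEnvMapRefracted() { return 0.5f; }

constexpr float kFresnelCutoff = 0.5f;   // assumed Fresnel cutoff
constexpr float kBlendWidth    = 0.25f;  // assumed width of the attenuation ramp

// Blended Fresnel Splitting: spend the single ray on whichever direction the
// Fresnel term favors, approximate the other with an environment-map sample,
// and attenuate the ray-cast contribution near the cutoff to soften the seam.
static float blendedFresnelSplit(float fresnel) {
    float fade = std::min(std::fabs(fresnel - kFresnelCutoff) / kBlendWidth, 1.0f);
    float reflected, refracted;
    if (fresnel > kFresnelCutoff) {
        reflected = fade * castReflectionRay();   // spend the ray on reflection
        refracted = sampleEnvMapRefracted();      // approximate transmission
    } else {
        refracted = fade * castRefractionRay();   // spend the ray on refraction
        reflected = sampleEnvMapReflected();      // approximate reflection
    }
    return fresnel * reflected + (1.0f - fresnel) * refracted;
}

int main() {
    printf("grazing pixel: %.3f\n", blendedFresnelSplit(0.85f));
    printf("head-on pixel: %.3f\n", blendedFresnelSplit(0.10f));
}
```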
[0057] Interactive-rate path tracers accumulate an image over multiple frames. As a consequence, they can randomize between reflection and refraction for every pixel in every frame, and the results are smoothed by the accumulation step. However, for a real-time noise-free path tracer, both reflected and refracted images must be available from the first frame without any noise from randomization. In combination with the noise-free unified path tracer architecture and secondary ray optimizations, the rendering process using Blended Fresnel Splitting can handle smooth transmissive objects using one ray cast and two environment map (e.g., skybox or reflection probe) samples.
[0058] Figure 4 is a simplified flow diagram illustrating a noise-free unified path tracing rendering pipeline 400 according to embodiments of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications.
[0059] As shown, path tracing pipeline 400 includes a pre-z (depth prepass) vertex shader 410 followed by a vertex shader 420 and fragment shader 430 corresponding to the holistic noise-free render pass. This rendering pipeline 400 is configured as a noise-free unified path tracer using a depth pre-pass, secondary ray optimization, and blended Fresnel splitting techniques. By executing a minimal vertex shader 410 to output a depth buffer early, this pipeline leverages an important performance improvement of deferred pipelines which allows occluded fragments in the subsequent holistic lighting pass to be skipped upon failing depth testing. As discussed previously, all lighting and shadowing effects are performed in the single holistic fragment shader 430, and the rendered graphics are produced without denoising or accumulation by using deterministic ray casts. To reduce the cost of recursive ray tracing process calls, this pipeline 400 is configured to render non-primary ray cast processes using a material property cut-off threshold (e.g., maximum roughness) to adaptively disable effects per-object and per-bounce of a ray cast to achieve a desired rendering quality. Further, pipeline 400 implements Blended Fresnel Splitting to approximate the appearance of ray splitting at transmissive surfaces without casting two rays per pixel.
[0060] In an example, the present invention provides a video graphics rendering system (such as system 100 of Figure 1) configured to implement the noise-free unified path tracer pipeline 400. This can be configured as a mobile device (e.g., smartphone, tablet, VR goggles, etc.) having at least a processor configured to perform the methods discussed previously using executable code in a memory storage with instructions to perform the actions of these methods. Example method flows of the operation of such rendering systems are discussed with respect to Figures 5 and 6.

[0061] Figure 5 is a simplified flow diagram illustrating a method 500 for rendering refractive transparent objects in real-time video graphics using ray tracing according to embodiments of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. For example, one or more steps may be added, removed, repeated, replaced, modified, rearranged, and/or overlapped, and they should not limit the scope of the claims.
[0062] According to an example, method 500 of rendering objects can be performed by a rendering system, such as system 100 in Figure 1. More specifically, a processor of the system can be configured to perform the actions of method 500 by executable code stored in a memory storage (e.g., permanent storage) of the system. As shown, method 500 can include step 502 of receiving a plurality of graphics data associated with a 3D scene including at least a first object (or including all object instantiations in the scene). This can include all data necessary to determine the first object intersected by a ray cast through each pixel in the viewport; the plurality of graphics data includes at least a plurality of vertex data associated with a plurality of vertices in the 3D scene. In an example, the method also includes generating a plurality of primitive data using at least the plurality of vertex data and a vertex shader. The plurality of primitive data can include at least a position data and a material data.
[0063] In step 504, the method includes determining a roughness value of at least the first object visible at each pixel using at least the plurality of vertex data and a rasterization of material properties and texture coordinates associated with at least the first object. In a specific example, method 500 further includes interpolating render data through the rasterization of objects visible to a camera, in place of casting primary rays. This includes using rasterization to calculate both the direction of an implicit primary ray through a pixel and a first intersection position of that ray with the first object. In a specific example, the method further includes storing a maximum roughness value across a hits path associated with the implicit primary ray.
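For illustration, a small C++ sketch of reconstructing the implicit primary ray from rasterized data is given below, assuming the camera position and the interpolated world-space position of the visible surface are available to the shader; the names are illustrative.

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3  sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float length(const Vec3& v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }
static Vec3  normalize(const Vec3& v) { float l = length(v); return {v.x / l, v.y / l, v.z / l}; }

// The "implicit" primary ray: rasterization already interpolates the world-space
// position of the surface visible through a pixel, so the primary hit and the
// primary-ray direction can be reconstructed without casting a ray.
struct ImplicitPrimaryRay { Vec3 origin; Vec3 direction; Vec3 hitPosition; };

static ImplicitPrimaryRay implicitPrimaryRay(const Vec3& cameraPos, const Vec3& interpolatedWorldPos) {
    Vec3 dir = normalize(sub(interpolatedWorldPos, cameraPos));
    return {cameraPos, dir, interpolatedWorldPos};
}

int main() {
    ImplicitPrimaryRay r = implicitPrimaryRay({0, 0, 0}, {1, 2, 2});
    printf("primary ray direction: %.3f %.3f %.3f\n", r.direction.x, r.direction.y, r.direction.z);
}
```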
[0064] In step 506, the method includes casting at least a secondary ray from the first intersection position within a fragment shader. The method can further include determining whether or not to cast the secondary ray by comparing the roughness value against a predetermined threshold roughness value (i.e., secondary ray optimization). Also, the secondary ray can be cast in either a reflected or a refracted direction from the first intersection position. In an example, the vertex shader and the fragment shader can be processed using a GPU, such as the GPU 132 in system 100 of Figure 1. In a specific example, the secondary ray is cast using hardware-accelerated in-line tracing.
[0065] In step 508, the method includes calculating a Fresnel value associated with the first intersection position and the implicit primary ray in the fragment shader. In step 510, the method includes determining whether to skip, reflect, or refract the secondary ray from the first intersection position using the fragment shader based at least on the Fresnel value and the roughness value. In a specific example, the determination of whether to skip, reflect, or refract the secondary ray can include comparing the Fresnel value against a predetermined threshold Fresnel value (i.e., Blended Fresnel Splitting).
[0066] In step 512, the method includes providing a visual continuity across reflected and refracted images using an environment map associated with the 3D scene. In an example, the method includes sampling an environment map to provide a consistent intermediate image used to blend between reflected and refracted images. In an example, the method includes attenuating the reflected and refracted ray casts following the Blended Fresnel Splitting for joint rendering of reflection and refraction for smooth transmissive objects.
[0067] In step 514, the method includes shading at least the first pixel covering the first intersection position based at least on the secondary ray. In step 516, the method includes storing at least the first pixel in a frame buffer. In an example, the frame buffer is configured within a memory, such as the memory 140 in system 100 of Figure 1. Further, method 500 includes transforming the 3D scene to a screen space, such as display 162 of system 100 of Figure 1.
[0068] Figure 6 is a simplified flow diagram illustrating a method for generating a video with ray tracing according to embodiments of the present invention. This diagram is merely an example, which should not unduly limit the scope of the claims. One of ordinary skill in the art would recognize many variations, alternatives, and modifications. For example, one or more steps may be added, removed, repeated, replaced, modified, rearranged, and/or overlapped, and they should not limit the scope of the claims.
[0069] According to an example, the method of generating a video can be performed by a rendering system, such as system 100 in Figure 1. More specifically, a processor of the system can be configured to perform the actions of method 600 by executable code stored in a memory storage (e.g., permanent storage) of the system. As shown, method 600 can include step 602 of generating a 3D scene including at least a first object (or including all object instantiations in the scene). In an example, the 3D scene is generated by a CPU, such as the CPU 130 of system 100 in Figure 1. In step 604, the method includes receiving a plurality of graphics data associated with the 3D scene. The plurality of graphics data includes a plurality of vertex data associated with a plurality of vertices in the 3D scene.
[0070] In step 606, the method includes determining a roughness value of at least the first object using at least the plurality of vertex data and a rasterization of material properties and texture coordinates associated with at least the first object. This data is used to compute a primary ray direction and hit position without casting a primary ray. In step 608, the method includes casting a secondary ray within a fragment shader from the first intersection position between the first object and the implicit primary ray determined from the rasterization.

[0071] In step 610, the method includes calculating a Fresnel value associated with the first intersection position and the implicit primary ray in the fragment shader. In step 612, the method includes determining whether to skip, reflect, or refract the secondary ray from the first intersection position using the fragment shader based at least on the Fresnel value and the roughness value. In step 614, the method includes shading at least the first pixel covering the first intersection position based at least on the secondary ray.
[0072] While the above is a full description of the specific embodiments, various modifications, alternative constructions and equivalents may be used. Therefore, the above description and illustrations should not be taken as limiting the scope of the present invention which is defined by the appended claims.

Claims

WHAT IS CLAIMED IS:
1. A method for rendering refractive transparent objects in real-time video graphics using ray tracing, the method comprising:
receiving a plurality of graphics data associated with a three-dimensional (3D) scene including at least a first object, the plurality of graphics data including a plurality of vertex data associated with a plurality of vertices in the 3D scene;
generating a plurality of primitive data using at least the plurality of vertex data and a vertex shader, the plurality of primitive data comprising a position data and a material data;
determining a roughness value of the first object using at least the plurality of vertex data and a rasterization of material properties and texture coordinates associated with at least the first object;
casting a secondary ray within a fragment shader from a first intersection position between the first object and an implicit primary ray determined from the rasterization;
calculating a Fresnel value associated with the first intersection position and the implicit primary ray in the fragment shader;
determining whether to skip, reflect, or refract the secondary ray from the first intersection position using the fragment shader based at least on the Fresnel value and the roughness value;
providing a visual continuity across reflected and refracted images using an environment map associated with the 3D scene;
shading the first pixel covering the first intersection position based at least on the secondary ray; and
storing the first pixel in a frame buffer.
2. The method of claim 1 wherein the secondary ray is cast using hardware-accelerated in-line ray tracing.
3. The method of claim 1 further comprising interpolating a rasterization of objects visible to a camera, the objects including the first object.
4. The method of claim 1 further comprising determining whether or not to cast the secondary ray by comparing the roughness value against a predetermined threshold roughness value.
5. The method of claim 1 further comprising storing a maximum roughness value across a hits path associated with the implicit primary ray.
6. The method of claim 1 further comprising determining whether to reflect or refract the secondary ray by comparing the Fresnel value against a predetermined threshold Fresnel value.
7. The method of claim 1 wherein the secondary ray is cast in a refracted direction from the first intersection position.
8. The method of claim 1 wherein the vertex shader and the fragment shader are processed using a graphics processing unit.
9. A system for rendering video graphics, the system comprising:
a storage comprising executable instructions;
a memory; and
a processor coupled to the storage and the memory, the processor being configured to:
generate a plurality of graphics data associated with a three-dimensional (3D) scene including at least a first object, the plurality of graphics data including a plurality of vertex data associated with a plurality of vertices in the 3D scene;
generate a plurality of primitive data using at least the plurality of vertex data and a vertex shader, the plurality of primitive data comprising a position data and a material data;
determine a roughness value of the first object using at least the plurality of vertex data and a rasterization of material properties and texture coordinates associated with at least the first object;
cast a secondary ray within a fragment shader from a first intersection position between the first object and an implicit primary ray determined from the rasterization;
calculate a Fresnel value associated with the first intersection position and the implicit primary ray in the fragment shader;
determine whether to skip, reflect, or refract the secondary ray from the first intersection position using the fragment shader based at least on the Fresnel value and the roughness value;
shade the first pixel covering the first intersection position based at least on the secondary ray; and
store the first pixel in the memory.
10. The system of claim 9, wherein the processor comprises a central processing unit (CPU) and a graphics processing unit (GPU).
11. The system of claim 10, wherein the memory is shared by the CPU and the GPU.
12. The system of claim 10, wherein the memory comprises a frame buffer for storing the first pixel.
13. The system of claim 10 further comprising a display configured to display the first pixel at a refresh rate of at least 24 frames per second.
14. A method for generating a video with ray tracing, the method comprising:
generating a three-dimensional (3D) scene including at least a first object;
receiving a plurality of graphics data associated with the 3D scene, the plurality of graphics data including a plurality of vertex data associated with a plurality of vertices in the 3D scene;
generating a plurality of primitive data using at least the plurality of vertex data and a vertex shader, the plurality of primitive data comprising a position data and a material data;
determining a roughness value of the first object using at least the plurality of vertex data and a rasterization of material properties and texture coordinates associated with at least the first object;
casting a secondary ray within a fragment shader from a first intersection position between the first object and an implicit primary ray determined from the rasterization;
calculating a Fresnel value associated with the first intersection position and the implicit primary ray in the fragment shader;
determining whether to skip, reflect, or refract the secondary ray from the first intersection position using the fragment shader based at least on the Fresnel value and the roughness value; and
shading the first pixel covering the first intersection position based at least on the secondary ray.
15. The method of claim 14 further comprising accessing an environment map associated with the 3D scene.
16. The method of claim 14 further comprising providing a visual continuity across reflected and refracted images using an environment map associated with the 3D scene.
17. The method of claim 14 wherein the 3D scene is generated using a central processing unit.
18. The method of claim 17 wherein the vertex shader and the fragment shader are processed using a graphics processing unit.
19. The method of claim 18 wherein the central processing unit and the graphics processing unit share a memory.
20. The method of claim 14 further comprising transforming the 3D scene to a screen space.