Ray Tracing vs. Path Tracing vs. Rasterization Explained

Rasterization is the predominant technique used for real-time rendering in video games. Despite the growing use of ray tracing for lighting, rasterization still renders the bulk of the scene. It is faster, more efficient, and often produces near-realistic results, with some caveats. Let’s look at these rendering techniques, including their strengths and weaknesses.

What is Rasterization?

Rasterization involves transforming a 3D scene into a 2D image, rendering only the objects visible from the viewport perspective and discarding the rest. The objects in 3D space are made of triangles, which are converted into pixels and colored according to their textures, lighting, and other material data.

Step-by-Step Rasterization

  • 3D Scene and Triangles
    • The 3D scene is composed of objects made of triangles (a simple type of polygon).
    • Each triangle has 3D coordinates (x, y, z) specifying its exact position in the scene.
  • 3D to 2D
    • The 3D coordinates are transformed into 2D points using the screen as a POV canvas.
    • These points indicate the position of each triangle on the screen.
    • The triangles closest to the screen are drawn, while hidden ones are discarded (culled or rejected by the depth test).
  • Triangle to Pixel
    • Each triangle is mapped onto the pixel grid; the pixels it covers generate fragments.
    • A pixel typically counts as covered when the triangle overlaps its sample point (usually the pixel center), and the corresponding fragment is colored accordingly.
  • Fragment Shading
    • The final color of each pixel is calculated using the material, lighting, textures, and special effects applied to the fragment.
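The steps above can be sketched in a few lines of Python. This is a minimal illustration, not a GPU pipeline: the projection, the tiny 8x8 "screen," and all function names are assumptions for the example.

```python
def project(v, width=8, height=8, focal=1.0):
    """Perspective-project a 3D point (x, y, z) onto a width x height screen."""
    x, y, z = v
    # Perspective divide: points farther away (larger z) shrink toward the center.
    sx = (x * focal / z + 1.0) * 0.5 * width
    sy = (y * focal / z + 1.0) * 0.5 * height
    return sx, sy

def edge(a, b, p):
    """Signed-area test: positive if p lies to the left of the edge a -> b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterize(tri, width=8, height=8):
    """Return the set of pixel (x, y) coords whose centers fall inside tri."""
    a, b, c = (project(v, width, height) for v in tri)
    covered = set()
    for py in range(height):
        for px in range(width):
            p = (px + 0.5, py + 0.5)  # sample at the pixel center
            w0, w1, w2 = edge(a, b, p), edge(b, c, p), edge(c, a, p)
            # The point is inside if all three edge functions agree in sign.
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                covered.add((px, py))
    return covered

# One triangle two units in front of the camera, on the toy 8x8 screen.
triangle = [(-0.5, -0.5, 2.0), (0.5, -0.5, 2.0), (0.0, 0.5, 2.0)]
pixels = rasterize(triangle)
```

Real hardware runs the same kind of coverage test massively in parallel, with depth buffering and sub-pixel precision layered on top.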

What is Ray Tracing?

While rasterization employs a series of clever hacks to render a 3D scene in 2D, ray tracing tries to imitate light and its natural properties. It involves casting a ray through each screen pixel and sampling the surface it intersects. The hit point helps determine the color of the pixel.
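The one-ray-per-pixel idea can be sketched with a simple ray-sphere intersection test. The scene, resolution, and function names below are illustrative assumptions, not an actual ray-tracing API:

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Distance t to the nearest ray-sphere hit (normalized direction), or None."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    b = 2.0 * (dx * ox + dy * oy + dz * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c  # the quadratic's 'a' is 1 for a normalized direction
    if disc < 0.0:
        return None         # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def render(width=4, height=4):
    """Cast one ray per pixel toward a sphere of radius 2, five units away."""
    center, radius = (0.0, 0.0, 5.0), 2.0
    image = []
    for py in range(height):
        row = []
        for px in range(width):
            # Map the pixel center onto an imaginary screen plane at z = 1.
            x = (px + 0.5) / width * 2.0 - 1.0
            y = (py + 0.5) / height * 2.0 - 1.0
            length = math.sqrt(x * x + y * y + 1.0)
            direction = (x / length, y / length, 1.0 / length)
            t = intersect_sphere((0.0, 0.0, 0.0), direction, center, radius)
            row.append("#" if t else ".")   # hit -> sample the surface
        image.append("".join(row))
    return image

image = render()  # the sphere shows up as a block of '#' in the 4x4 grid
```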

Contemporary games employ ray tracing primarily to calculate scene lighting, with rasterization handling the rest. This hybrid ray-tracing pipeline consists of two main steps:

BVH Testing: Where did the Rays Terminate?

A Bounding Volume Hierarchy (BVH) helps determine the exact intersection points of light rays in a scene. It wraps the geometry in boxes called bounding volumes. The boxes are organized in a hierarchy, or tree, with larger boxes packing successively smaller ones.

  • The large boxes containing successively smaller boxes are called Top Level Acceleration Structures (TLAS).
  • The boxes at the lowest level containing triangles or polygons are called Bottom Level Acceleration Structures (BLAS).
  • When a ray enters a scene, it is tested against the larger boxes.
    • Upon a hit, the smaller boxes within are tested.
    • Each hit extends the process until a triangle in the BLAS is intersected.
  • If the ray fails to hit any geometry, the pixel is typically shaded with a miss color, often black or a sky color.
  • The hit point determines the color of the pixel from which the ray originated.
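The traversal above can be sketched with a toy two-level hierarchy. The slab test, the dictionary layout, and all names are assumptions for illustration, not the actual DXR/Vulkan acceleration structures:

```python
def hit_aabb(origin, inv_dir, box):
    """Slab test: does the ray hit the axis-aligned box (min, max corners)?"""
    tmin, tmax = 0.0, float("inf")  # only count hits in front of the origin
    for axis in range(3):
        t1 = (box[0][axis] - origin[axis]) * inv_dir[axis]
        t2 = (box[1][axis] - origin[axis]) * inv_dir[axis]
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

def traverse(node, origin, inv_dir):
    """Walk the hierarchy top-down; only descend into boxes the ray hits."""
    if not hit_aabb(origin, inv_dir, node["box"]):
        return []                       # miss: everything inside is skipped
    if "triangles" in node:             # leaf (BLAS level): candidate triangles
        return node["triangles"]
    hits = []
    for child in node["children"]:      # interior node: test the smaller boxes
        hits += traverse(child, origin, inv_dir)
    return hits

# Toy hierarchy: one big box (TLAS level) holding two small leaf boxes.
root = {
    "box": ((-10.0, -10.0, -10.0), (10.0, 10.0, 10.0)),
    "children": [
        {"box": ((-5.0, -1.0, -1.0), (-3.0, 1.0, 1.0)), "triangles": ["tri_left"]},
        {"box": ((3.0, -1.0, -1.0), (5.0, 1.0, 1.0)), "triangles": ["tri_right"]},
    ],
}

direction = (1.0, 0.0, 0.0)             # a ray pointing down +x from the origin
inv_dir = tuple(1.0 / d if d != 0.0 else float("inf") for d in direction)
candidates = traverse(root, (0.0, 0.0, 0.0), inv_dir)
```

The ray enters the big box, but only the right-hand leaf is descended into; the left leaf and its triangles are never tested, which is the whole point of the hierarchy.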

Hit Point Shading

Once the hit point is determined, the pixel color has to be calculated. It depends on the surface properties (smoothness/roughness), texture, reflectivity, transparency, and lighting:

Ray Tracing
  • Shadows and diffuse lighting are calculated by shooting a second ray from the hit point towards the light sources.
  • If the ray is blocked by a surface, the point is in shadow; otherwise, it’s lit.
  • Reflection rays are cast from glossy surfaces and bounce between surfaces until they reach a light source; the gathered light is then reflected back at the original hit point.
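The shadow-ray test described above can be sketched with a single sphere acting as the occluder. The scene and all names here are assumptions for the example:

```python
import math

def sphere_hit_t(origin, direction, center, radius):
    """Distance to the nearest sphere hit along a normalized ray, or None."""
    oc = [origin[i] - center[i] for i in range(3)]
    b = 2.0 * sum(direction[i] * oc[i] for i in range(3))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None      # small epsilon avoids self-intersection

def in_shadow(point, light_pos, occluders):
    """Shoot a shadow ray from the hit point toward the light; blocked => shadow."""
    d = [light_pos[i] - point[i] for i in range(3)]
    dist = math.sqrt(sum(v * v for v in d))
    d = [v / dist for v in d]
    for center, radius in occluders:
        t = sphere_hit_t(point, d, center, radius)
        if t is not None and t < dist:  # blocker sits between the point and light
            return True
    return False

light = (0.0, 10.0, 0.0)
blocker = ((0.0, 5.0, 0.0), 1.0)        # a sphere hovering between floor and light
shadowed = in_shadow((0.0, 0.0, 0.0), light, [blocker])  # directly underneath
lit = in_shadow((5.0, 0.0, 0.0), light, [blocker])       # off to the side
```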

What is Path Tracing?

Path tracing is considered the holy grail of 3D computer graphics. To understand its workings, we need to go back to ray tracing:

  • Ray tracing involves casting a ray through each pixel, with the intention of mimicking real light and solving the rendering equation along that ray.
  • However, since real light behaves in infinitely complex ways, the result is only an approximation.

Path Tracing uses the Monte Carlo method, which uses random sampling to approximate the path and behavior of light rays:

  • Unlike ray tracing, path tracing casts multiple rays per pixel into the scene.
  • Upon intersection, the surface data is gathered, and the ray is redirected based on the material properties of the hit point.
  • Each bounce contributes to the final pixel color, simulating global illumination, light bleeding, and multi-bounce diffuse lighting.
  • Ray tracing, by contrast, usually terminates after the first hit.

Each ray is sent in a slightly different direction; this is called stochastic sampling. The rays’ paths, and hence the lighting, become more and more accurate as the ray count per pixel increases.

  • Each ray samples a possible light path using Monte Carlo randomization.
  • Using Importance Sampling, the rays are biased towards potential light sources.
  • Hence, we get more useful rays and fewer empty results.
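The two sampling strategies above can be illustrated with a toy integral: the irradiance a surface receives from a uniform white sky, whose exact value is pi. Uniform hemisphere sampling converges slowly, while cosine-weighted importance sampling nails this particular integral with every sample. The functions and the constant-sky assumption are my own for this sketch:

```python
import math, random

def sample_hemisphere(rng):
    """Uniformly sample a direction on the hemisphere around the normal (0, 0, 1)."""
    z = rng.random()                      # cos(theta), uniform in [0, 1)
    phi = 2.0 * math.pi * rng.random()
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

def estimate_irradiance(samples, rng):
    """Monte Carlo estimate of irradiance under a uniform white sky of radiance 1."""
    total = 0.0
    for _ in range(samples):
        d = sample_hemisphere(rng)
        cos_theta = d[2]                  # angle between the sample and the normal
        pdf = 1.0 / (2.0 * math.pi)       # uniform-hemisphere probability density
        total += 1.0 * cos_theta / pdf    # radiance * cos(theta) / pdf
    return total / samples                # converges to pi as samples grow

def estimate_irradiance_cosine(samples, rng):
    """The same integral with importance sampling: pdf proportional to cos(theta)."""
    total = 0.0
    for _ in range(samples):
        z = math.sqrt(rng.random())       # cosine-weighted cos(theta)
        pdf = z / math.pi
        total += 1.0 * z / pdf            # every sample contributes exactly pi
    return total / samples
```

In a real scene the sky is not constant, so importance sampling cannot be exact, but biasing rays toward where the light actually comes from still cuts the noise dramatically.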

Path Tracing with ReSTIR

Casting multiple rays per pixel is expensive. Therefore, games reuse light samples spatially and temporally to improve quality and performance. ReSTIR, or Reservoir-based Spatio-Temporal Importance Resampling, is one such solution:

  • Trace a ray or two through each pixel toward potentially contributing light sources.
  • Store the results and their relevance in a reservoir (data structure).
  • Repeat the process each frame, so the reservoir accumulates an ever-larger effective sample count.
  • Check neighboring pixels for high-quality samples.
  • Choose the best samples from the spatial and temporal reservoir to shade the pixels.
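At the heart of ReSTIR is weighted reservoir sampling: a pixel can stream through any number of candidate light samples while storing only one, kept with probability proportional to its weight. Here is a minimal sketch; the class layout and names are assumptions for illustration, not the actual RTXDI implementation:

```python
import random

class Reservoir:
    """Holds one light sample, kept with probability proportional to its weight."""
    def __init__(self, rng):
        self.rng = rng
        self.sample = None
        self.w_sum = 0.0      # running sum of all candidate weights seen so far
        self.count = 0        # how many candidates have been streamed in

    def update(self, sample, weight):
        """Stream in one candidate; replace the kept sample with prob weight / w_sum."""
        self.w_sum += weight
        self.count += 1
        if self.w_sum > 0.0 and self.rng.random() < weight / self.w_sum:
            self.sample = sample

    def merge(self, other):
        """Fold in a neighboring pixel's (or last frame's) reservoir in one step."""
        self.update(other.sample, other.w_sum)
        self.count += other.count - 1

rng = random.Random(7)
res = Reservoir(rng)
res.update("dim light", 1.0)
res.update("bright light", 3.0)   # three times likelier to end up kept
```

The `merge` step is what makes the spatial and temporal reuse cheap: combining two reservoirs costs one update, no matter how many samples each has already seen.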

Areej Syed

Processors, PC gaming, and the past. I have been writing about computer hardware for over seven years with more than 5000 published articles. Started off during engineering college and haven't stopped since. Find me at HardwareTimes and PC Opset.