GTC 2020 Reveals How Nvidia and Partners Plan to Leverage GPUs to Advance CAD
15 Apr, 2020 By: Alex Herrera
Herrera on Hardware: The annual conference — which was transformed this year from an in-person event to GTC Digital — made it clear that traditional 3D graphics is becoming a smaller piece of the puzzle.
It’s become a rite of spring: Every March we’ve come to expect Nvidia to put on its best dress, not only ready to impress its own customers, but hopefully to get all stakeholders invested in similar technologies and markets to follow suit. Once again, the GPU Technology Conference (GTC) went on as scheduled, but it looked nothing like it ever has. Amid the COVID-19 pandemic, Nvidia scrambled to transform the event — which typically hosts tens of thousands at the San Jose Convention Center — into a digital event streamed worldwide.
Although the format changed dramatically, as did the duration (Nvidia opted to release presentations, training sessions, and panels over a multiweek period), the ultimate goal did not. And as always, the conference yielded a range of new disclosures, proof points, and case studies smack-dab in the wheelhouse of today’s CAD professionals. Most notably, mined from this year’s event are more than a few nuggets of particular interest to CAD in the areas of 3D rendering, virtual reality, and graphics processing unit (GPU)-accelerated computing.
Rendering for CAD
For all intents and purposes, rendering today equates to ray tracing. By emulating the true physical properties of lights and materials, ray tracing yields photorealism that conventional raster-based 3D graphics can’t achieve (at least, not without a lot of cumbersome and time-consuming optimizations to approximate global illumination).
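To make the "emulating the physical properties of light" idea concrete, here is a toy sketch of the core ray-tracing operation: cast a ray, find where it hits geometry, and shade the hit point from the surface normal and the light direction. This is not any vendor's implementation — just a minimal single-sphere, single-light example for illustration; all names and values here are invented for the demo.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2 for the
    # nearest positive t (direction is assumed normalized, so a = 1).
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def shade(origin, direction, center, radius, light_dir):
    # Lambertian (diffuse) shading: brightness follows the cosine of
    # the angle between the surface normal and the light direction.
    t = ray_sphere_hit(origin, direction, center, radius)
    if t is None:
        return 0.0  # background: no surface to shade
    hit = [o + t * d for o, d in zip(origin, direction)]
    normal = [(h - c) / radius for h, c in zip(hit, center)]
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

# Cast one primary ray down the -z axis at a unit sphere, lit head-on.
brightness = shade((0, 0, 0), (0, 0, -1), (0, 0, -3), 1.0, (0, 0, 1))
print(round(brightness, 3))
```

A production ray tracer repeats this per pixel, recursing on reflected and refracted rays and sampling many light paths per pixel — which is exactly the arithmetic load that makes dedicated hardware acceleration attractive.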
When it comes to photorealism, raster graphics can’t match a physically based rendering technology like ray tracing. Image source: Autodesk.
Not all ray-traced rendering is created equal, however, as implementations vary somewhat algorithmically and in level of detail. But the biggest differentiator today is not how it's computed, but where: on the CPU exclusively, or on the GPU (primarily, with some CPU handholding). The two camps have historically been separated by usage, both in terms of quality and throughput: GPU-accelerated for real-time, interactive use that delivers very good quality, and CPU for render-once, play-many use that promises the best possible quality (i.e., "final frame" studio film caliber).
Today, the line between those two camps remains generally the same, but the gap between them is diminishing. While performance on the CPU side has improved from generation to generation, those incremental gains have paled in comparison to what GPUs have achieved in both performance and quality, particularly with the 2018 introduction of Nvidia's RTX technology. First seeded in the company's Volta generation and later unveiled in full fruition with the more broadly marketed Turing GPU, RTX is specifically architected to speed up ray-traced rendering, primarily via two means: accelerated ray casting (via dedicated hardware RT Cores) and image denoising (via machine learning accelerated on hardware Tensor Cores). (Read more about Turing and RTX here.) While "real-time performance" remains subjective (precisely how many frames per second are we talking, and at what level of detail, resolution, and quality?), I think it's fair to accept Nvidia's argument that RTX GPUs are the first that can legitimately claim to offer real-time ray tracing.
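Why is denoising such a lever on performance? A path tracer estimates each pixel by averaging many random light samples, and the noise in that estimate shrinks only as the square root of the sample count — so halving visible noise by brute force costs roughly four times the work. The toy simulation below (not Nvidia's method; the ground-truth value and noise model are invented for illustration) demonstrates that scaling, and hence why rendering few samples and cleaning up the residual noise with a learned denoiser is so much cheaper than rendering many.

```python
import random

def noisy_pixel(samples, seed=0):
    # One pixel estimate: average `samples` noisy light samples
    # around a hypothetical ground-truth brightness of 0.5.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        total += 0.5 + rng.uniform(-0.5, 0.5)  # one noisy sample
    return total / samples

def mean_error(samples, trials=400):
    # Average absolute error across many independent pixel estimates.
    return sum(abs(noisy_pixel(samples, seed=s) - 0.5)
               for s in range(trials)) / trials

# Quadrupling the sample count roughly halves the noise -- which is
# why denoising a low-sample frame beats brute-force sampling.
print(mean_error(16) > mean_error(64) > mean_error(256))  # True
```

The 1/sqrt(N) convergence is a standard property of Monte Carlo estimation; RTX's contribution is pairing fast ray casting with a denoiser good enough that a handful of samples per pixel can pass for a converged frame.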
Ray-tracing performance on GPUs has increased significantly with Nvidia RTX technology. Image source: Autodesk, based on OptiX performance.
While the appeal of ray tracing has long been established, what is new is its economy, both on client workstations and in the cloud. No longer does it require tens or hundreds of thousands of dollars invested in a private render farm or uber-configured server. With a few thousand dollars spent on a capable RTX-equipped client, real-time (or close to it) ray tracing is achievable on your workstation — and cheap, or even free, with cycles rented in the cloud.
Enabling access to those options, of course, falls primarily on your independent software vendors (ISVs) and their ability to deliver them in your go-to application. That's happening, and fast: Autodesk (Arnold and VRED), Dassault Systèmes (SOLIDWORKS Visualize and CATIA Live), ESI (IC.IDO), Siemens (NX Ray Traced Studio), and Adobe (Dimension) are out in front, to name a few.
Think about rendering in the automotive space, and virtual prototyping of a car's lines and style will pop to mind. But the step up in quality that ray-traced rendering offers over conventional raster graphics delivers value in subtler ways than producing hero images of a high-powered sedan cruising the highway.
Consider ESI's example of evaluating a stamped and polished car door: traditionally, vendors have gone through the expensive, time-consuming process of creating a physical door to ensure there were no visual artifacts that might adversely affect its appeal. Instead, ESI's client went the virtual route, producing no physical prototype at all and creating a ray-traced model of the door for review. The virtual process proved just as capable — for example, identifying an unexpected visual anomaly around a door handle, where an unsuspecting eye might mistake reflections at a certain angle for a dent (definitely a bad visual cue for customers approaching the vehicle in a showroom).
A high-detail ray-traced digital prototype delivers the same pre-production benefits as a more costly physical sample. Image source: ESI.