With New Turing, NVIDIA Doubles Down on the Future of Real-Time Ray-Tracing
18 Oct, 2018 By: Alex Herrera
Herrera on Hardware: CAD professionals are expected to reap the benefits.
Pricey to Start, but Give It Time
Announced in conjunction with the unveiling of Turing (but expected to ship later this year) were the first Turing-based Quadro-brand add-in card GPUs: the Quadro RTX 8000, Quadro RTX 6000, and Quadro RTX 5000. The first thing you might notice here is the addition of the 8000-level model. Prior generations maxed out at 6000, and while the RTX 6000 and RTX 5000 hit roughly the same expected price points as their predecessors, the P6000 and P5000, the RTX 8000 ups the pricing ante dramatically, breaking new ground at around $10,000.
Specifications for the first round of high-end NVIDIA Quadro GPUs leveraging the new Turing technology. (Source: NVIDIA)
Don't let the pricing scare you, as NVIDIA plans at least two lower-cost derivatives of the initial TU102 Turing chip driving the top two models. The TU104 will be somewhat less expensive, as reflected in the price of the RTX 5000. And a TU106 chip in the offing will push further down the price curve, likely triggering (I would speculate) an RTX 4000 and possibly an RTX 2000.
The Turing-based Quadro RTX 8000 establishes a new high-water mark — in both price and performance — for a Quadro-brand product. (Source: NVIDIA)
How will Turing Help CAD — and When?
Of course, innovative hardware is pointless without application support. To provide hooks to Turing's ray-trace acceleration and build critical-mass support among software developers, NVIDIA created RTX middleware. It sits on Volta and Turing hardware only, exposing capabilities through popular ray-tracing APIs: Microsoft DirectX Raytracing (DXR), Vulkan (via NVIDIA extensions), and NVIDIA's own OptiX. RTX appears to be an elegantly structured stack, providing (according to NVIDIA's diagram) seamless and independent access to any of the processing pipe flavors that Turing offers: conventional 3D rasterization and compute, as well as the new ray-tracing and AI pipes.
NVIDIA RTX middleware exposes its Turing hardware and AI acceleration to applications. (Source: NVIDIA)
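To make that layered structure more concrete, here is a purely illustrative sketch, with hypothetical names of my own (none of these classes or calls are real NVIDIA APIs), of the general idea: one middleware-style interface routing application work to whichever independent pipe the application requests.

```python
# Hypothetical sketch of middleware-style dispatch: one interface in front of
# several backend "pipes" (raster, ray-tracing, AI), mirroring the stack the
# article describes. All names are illustrative, not real NVIDIA APIs.

class Pipe:
    def submit(self, work):
        raise NotImplementedError

class RasterPipe(Pipe):
    def submit(self, work):
        return f"rasterized:{work}"

class RayTracePipe(Pipe):
    def submit(self, work):
        return f"ray-traced:{work}"

class TensorPipe(Pipe):
    def submit(self, work):
        return f"denoised:{work}"

class Middleware:
    """Routes work to whichever pipe the application asks for."""
    def __init__(self):
        self.pipes = {
            "raster": RasterPipe(),
            "ray": RayTracePipe(),
            "ai": TensorPipe(),
        }

    def render(self, pipe_name, work):
        return self.pipes[pipe_name].submit(work)

mw = Middleware()
print(mw.render("ray", "foreground"))   # ray-traced:foreground
print(mw.render("raster", "backdrop"))  # rasterized:backdrop
```

The point of the sketch is only the shape of the stack: the application never talks to hardware pipes directly, it asks the middleware layer, which is what lets NVIDIA expose the same capabilities through DXR, Vulkan, or OptiX above it.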
Which applications are most likely to adopt NVIDIA's Turing-accelerated, RTX-accessed ray-tracing in the short term? Well, beyond the obvious answer of rendering-specific applications such as V-Ray, logical prospects would include gaming and professional graphics — the same markets that demand high-performance conventional 3D graphics today. Design software providers are smack dab in the crosshairs, with NVIDIA naming partners Adobe, Autodesk, Dassault Systèmes, and Siemens PLM Software as committed to adopting RTX.
Developers of several key design/engineering applications have committed to NVIDIA RTX. (Source: NVIDIA)
Tapping RTX will be Autodesk's Arnold ray-tracer, supported in both Maya and 3ds Max, the latter being more popular among AEC and manufacturing types for virtual prototyping and marketing. Similarly, Adobe announced support in its Dimension renderer, claiming a 10X performance boost (presumably over the previous-generation Pascal GPU). Allegorithmic reports an 8X boost when using its Substance ray-tracer. Altair's recently acquired Thea renderer is also on board with RTX, expecting wide use among AEC and product design professionals. Dassault will be integrating RTX support in both SOLIDWORKS and CATIA to provide users a quicker, more photorealistic look and feel. Even Ansys, more focused on simulation than modeling, will incorporate RTX into its VRXPERIENCE tool to speed virtual prototyping (at a rate it sees as 4X faster than previously).
Hybrid Rendering to Assist in the Transition to Ray-Tracing
A key benefit of RTX's parallel-pipe architecture is the ability to gracefully combine ray-traced rendering with tried-and-true 3D raster graphics on the same scene. The new ray-tracing pipe and the existing 3D raster pipes are not mutually exclusive, but can be mixed at the discretion of the application. With Turing and RTX, for example, an application could choose ray-tracing for detailed characters and objects in foreground focus, leaving conventional 3D graphics to process peripheral scenery, distant objects, and simple backgrounds.
RTX access to independent ray-trace and raster pipelines allows for graceful balancing of quality vs. performance, and paves a transition path from raster to ray-tracing. (Source: NVIDIA)
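The foreground/background split described above can be sketched in a few lines. This is purely illustrative (the partitioning function, object fields, and cutoff are my own inventions, not any NVIDIA or application API): the application decides, per object, which pipe gets the work.

```python
# Illustrative sketch (not NVIDIA's API): partition a scene so that nearby,
# in-focus objects go to a hypothetical ray-tracing pipe while peripheral
# and distant objects stay on the conventional raster pipe.

from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    distance: float   # distance from camera, arbitrary units
    in_focus: bool    # flagged by the application as foreground focus

def partition_scene(objects, focus_cutoff=10.0):
    """Split objects between the two pipes at the application's discretion."""
    ray_traced, rasterized = [], []
    for obj in objects:
        # Ray-trace only nearby, in-focus geometry; raster everything else.
        if obj.in_focus and obj.distance <= focus_cutoff:
            ray_traced.append(obj)
        else:
            rasterized.append(obj)
    return ray_traced, rasterized

scene = [
    SceneObject("hero_character", 4.0, True),
    SceneObject("background_hills", 200.0, False),
    SceneObject("parked_car", 8.0, True),
    SceneObject("skybox", 1000.0, False),
]

rt, raster = partition_scene(scene)
print([o.name for o in rt])      # ['hero_character', 'parked_car']
print([o.name for o in raster])  # ['background_hills', 'skybox']
```

Because the two pipes are independent, an application can tune the cutoff per frame, spending ray-tracing budget only where image quality pays off and falling back to rasterization everywhere else.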
Planned Synergy, Serendipity, or Both?
Synergy or serendipity, call it what you like, but the fact that one of NVIDIA's GPU detours into machine learning has yielded a dramatic speed-up for ray-tracing is fitting and fortunate. NVIDIA's new darling market, machine learning, is finding compelling applications in virtually every corner of the computing landscape, including its core gaming and workstation applications. For the latter, consider machine intelligence used to generate the internal composition of objects to be 3D printed, taking into account the printing materials to create the optimal structure balancing weight, materials, and strength. Or, digging deeper into CAD workflows, AI is already taking on, and will increasingly take on, some of the burden of designing the form and function of the object itself, a direction Autodesk and SOLIDWORKS are pursuing. And the ability of AI to more quickly and thoroughly analyze imagery in 2D, 3D, and even 4D (that is, 3D over time) has obvious and compelling value in geoscience, surveillance, and medical applications.
NVIDIA's now-legitimate claim that machine learning benefits 3D rendering makes it easier for the company to unify its GPU products, with Turing being the lead example. However, it's worth noting that doubling down on ray-tracing — with dedicated RT Cores for ray traversal and Tensor Cores for AI denoising in Turing — is a bet not without short-term risk for NVIDIA. While Turing's combined inferencing and ray-tracing aptitude will offer some immediate payoff, it will take some time for ray-tracing to more significantly penetrate the company's key graphics markets. NVIDIA would appear to have its ducks in a row to ramp up support for RTX, particularly by supporting hybrid rendering options to give vendors a more graceful transition path. Still, broad and heavy use of RTX will likely take some time to build, as these types of transitions typically need both multiple development cycles and iterative bootstrapping: More use encourages more support, which encourages more use, and so on. Dassault, for example, is targeting mid-2019 for RTX support in CATIA, roughly a year after the introduction of Turing and the first Quadro RTX product to market.
The prospects in the long term are far less risky. The industry will likely follow a steady path toward a complete transition to ray-tracing, simply because virtually everyone prefers ray-traced images over raster graphics. Quantifying "long term" is guesswork, and surely it will be many years to full adoption, but there's nothing but time standing in the way. In the end, NVIDIA will in all likelihood have entrenched its position even more deeply as the leader in 3D computer graphics, while opening all kinds of doors in compute acceleration and artificial intelligence.