GTC 2017, Part 2: Revolutionary Uses of GPUs Hold Promise for CAD Users
1 Jun, 2017 | By: Alex Herrera
At the GPU Technology Conference (GTC), NVIDIA demonstrates possibilities for AI-boosted ray tracing, 3D printing, and generative design — all powered by GPUs.
In the first half of this two-part series, we took a look at the more expected advancements in 3D graphics hardware discussed at NVIDIA’s GPU Technology Conference (GTC) 2017. At this year’s event, we found compelling but evolutionary extensions in the use of that hardware — most notably, virtual reality and augmented reality (VR/AR), along with remote virtual graphics processing units (GPUs) and workstations. But with NVIDIA’s rapidly expanding presence in new markets and applications, those aren’t the only impacts CAD professionals can expect from the company’s technology in the coming years.
What Volta Means for Machine Learning
At GTC, NVIDIA divulged that Volta, its latest-generation GPU, includes hardware specifically engineered to accelerate machine-learning applications. Now, it’s certainly not news that NVIDIA is eager to develop artificial intelligence (AI) and machine learning markets for its GPUs; the company has been positioning its technology that way for several years. But by choosing to include these new engines, called tensor cores, NVIDIA is doubling down on that strategy. For the first time, the company is dedicating a substantial number of transistors, and a fair amount of product cost, to support AI, and essentially only AI.
With the inclusion of tensor cores, NVIDIA is committing transistors — and cost — in Volta to accelerate machine learning. Image courtesy of NVIDIA.
In neural net–based machine learning, data types called tensors are operated on by functions, and the deeper the net, the greater the number of functions. Common functions, like convolutions, boil down to a lot of matrix math. Now, as massively parallel and programmable devices that are blisteringly fast at matrix math, NVIDIA GPUs are a natural fit to accelerate tensor processing for neural net–based learning applications. That fit is the reason NVIDIA invested so heavily in AI in the first place.
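To see why "convolutions boil down to matrix math," consider the standard im2col trick: unrolling each patch of the input into a row turns a 2D convolution into a single matrix multiply, which is exactly the operation GPUs (and tensor cores) are built to accelerate. This is a minimal NumPy sketch of the idea, not NVIDIA's implementation:

```python
import numpy as np

def im2col(x, k):
    """Unroll every k-by-k patch of a 2D input into one row of a matrix."""
    h, w = x.shape
    out_h, out_w = h - k + 1, w - k + 1
    cols = np.empty((out_h * out_w, k * k))
    for i in range(out_h):
        for j in range(out_w):
            cols[i * out_w + j] = x[i:i + k, j:j + k].ravel()
    return cols

# A 2D convolution (cross-correlation) expressed as one matrix multiply:
x = np.arange(16.0).reshape(4, 4)   # toy 4x4 input "image"
kern = np.ones((3, 3)) / 9.0        # 3x3 box-filter kernel
out = (im2col(x, 3) @ kern.ravel()).reshape(2, 2)

# Reference: the same result computed patch by patch.
ref = np.array([[x[i:i + 3, j:j + 3].ravel() @ kern.ravel()
                 for j in range(2)] for i in range(2)])
assert np.allclose(out, ref)
```

In a deep net, every layer's convolutions across a batch of inputs become one large matrix product, which is why hardware dedicated to matrix multiplication pays off across the whole network.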
With its current GPUs, the company has found some early success in AI markets, and it’s seeing a great deal more interest building, especially from the biggest builders of datacenters: the top three cloud providers Amazon, Microsoft, and Google. Spurred by its growing string of design wins, NVIDIA is now all-in on machine learning, and nothing reflects that more than its decision to optimize the new Volta GPU for AI.
How much faster should we expect a Volta GPU with tensor cores to process a typical TensorFlow graph than a current GPU without them? NVIDIA reports a 12X gain in peak throughput, about 9X faster performance on actual matrix-multiplication benchmarks, and ultimately about 2.4 to 3.7 times faster execution of a typical deep neural net (the range depends on whether the machine is training, that is, building knowledge, or inferencing, that is, using that knowledge to subsequently choose or judge).
Why does any of this matter to the millions of CAD users out there designing, building, simulating, verifying, and visualizing? Well, I must admit it wasn’t completely obvious to me at first, but what I saw and heard at GTC — not just from NVIDIA but from independent software vendors (ISVs), original equipment manufacturers (OEMs), and users alike — convinced me that the advent of deep-net machine learning might have as much impact on the CAD world as on any other. Consider the following examples.
AI-accelerated ray tracing. Those familiar with this most common technique for rendering photorealistic images know that the image does not appear in full fidelity in a single pass, but instead resolves over time as the engine fires rays and accumulates the lighting contributions along each ray’s path through the scene. Unfortunately, despite today's massive computing horsepower, fully resolving an image still requires a noticeable amount of time — seconds to minutes — to complete.
To reduce time to final quality, NVIDIA will be outfitting its Iray renderer with the intelligence to recognize the image while it’s still tracing rays (the addition is expected later this year). Once the render has suitably coalesced into a recognizable image, AI fills in the remaining pixels, de-noising the image and wrapping up the time-consuming rendering process much sooner. For example, once the AI-enabled Iray ray tracer recognizes the objects in an automobile’s interior, it can fill in the image pixels before the renderer would otherwise finish.
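The principle behind AI denoising can be illustrated without a neural net at all: an early, grainy render is the true image plus per-pixel Monte Carlo noise, and a filter that exploits structure across neighboring pixels gets much closer to final quality than waiting for more samples. This sketch uses a crude 3x3 box filter as a stand-in for the trained denoiser, on a synthetic flat image (all values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# An early render pass: the ground-truth image plus per-pixel
# Monte Carlo noise that would only shrink as ~1/sqrt(samples).
truth = np.full((64, 64), 0.5)                     # flat ground-truth radiance
noisy = truth + rng.normal(0.0, 0.1, truth.shape)  # grainy intermediate pass

def box_denoise(img):
    """Crude stand-in for the AI denoiser: average each 3x3 neighborhood."""
    padded = np.pad(img, 1, mode="edge")
    return sum(padded[i:i + 64, j:j + 64]
               for i in range(3) for j in range(3)) / 9.0

mse_noisy = np.mean((noisy - truth) ** 2)
mse_denoised = np.mean((box_denoise(noisy) - truth) ** 2)
assert mse_denoised < mse_noisy  # the filtered pass is closer to final quality
```

A real learned denoiser goes further: rather than blindly averaging (which would blur edges), it is trained to recognize scene content and preserve detail while removing noise, which is what lets Iray stop tracing early.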
On the left, an intermediate snapshot of Iray rendering; on the right, the AI-accelerated “filled in” image. Image courtesy of NVIDIA.
But even more revolutionary uses of GPUs for CAD are going beyond the visualization stage of CAD workflows altogether. GPUs are already being used to accelerate design verification tasks, such as simulation for finite element analysis and computational fluid dynamics. And soon, designers may find another GPU-accelerated tool as compelling as any at their disposal today; GPU-accelerated AI is coming on fast, and NVIDIA and its partner OEMs and ISVs are creating some novel ways for CAD workflows to benefit.
Intelligent structural composition for 3D printing. A synergy between AI and 3D printing may not seem obvious at first, but NVIDIA and partner HP have teamed up to create an intelligent, tightly coupled, volume-based approach to optimizing physical design structure for 3D printing. The process initially looks like a standard CAD-based workflow, taking a polygonal model of an object and performing the appropriate stress analysis. At that point, the surface-based polygonal model is converted to a solid model composed of a 3D grid of volumetric pixels, or voxels. Then, a GPU-accelerated AI algorithm analyzes the identified stress points and recommends the ideal “in-fill” composition for printing the object, balancing weight and strength while taking into account the specific methods and materials of the HP 3D printing process.
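The core idea of the workflow above is that each voxel's in-fill density can be driven by the stress it carries: lightly loaded voxels get a sparse lattice, highly loaded ones approach solid material. The sketch below uses a simple linear rule on a made-up stress field; the actual NVIDIA/HP pipeline uses a learned model and real finite element analysis results, so every number and function name here is a hypothetical placeholder:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-voxel stress field from an upstream FEA pass (MPa).
stress = rng.uniform(0.0, 40.0, size=(8, 8, 8))

def infill_density(stress, ref_strength=40.0, d_min=0.15, d_max=1.0):
    """Illustrative rule only: map per-voxel stress linearly to print
    in-fill density, from a sparse lattice (d_min) up to solid (d_max)."""
    frac = np.clip(stress / ref_strength, 0.0, 1.0)
    return d_min + (d_max - d_min) * frac

density = infill_density(stress)
assert density.min() >= 0.15 and density.max() <= 1.0

# Material saved relative to printing the part fully solid:
savings = 1.0 - density.mean()
```

The weight-versus-strength balance the article describes shows up directly in the `savings` figure: wherever stress is low, the rule prints less material without compromising the loaded regions.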
An AI-enhanced, voxel-based design flow for optimal 3D printing. Image courtesy of NVIDIA.
The AI-designed voxel-based model, and its printed realization. Image courtesy of NVIDIA.