GTC 2014: A Virtual Showcase for Tomorrow's CAD Tools
27 Mar, 2014 | By: Alex Herrera
Nvidia’s GPU Technology Conference demonstrates that the computer graphics industry can never have enough applications.
The IT industry is overrun with acronyms, including some that are easily confused with each other. This month, two very similar-sounding computer graphics technology conferences — GDC and GTC — were held just one week apart in the very same convention center in San Jose, California. Although both are built around a common core of technology, the two conferences represent opposite ends of today’s computer graphics spectrum.
As its full name suggests, the Game Developer Conference is dedicated to video game development. But the conference Cadalyst readers will find more interesting is GTC — Nvidia's GPU Technology Conference, which was created specifically to advance the use of graphics processing units (GPUs) beyond gaming ... and beyond traditional GPU uses altogether.
Since its inception in 2009, the focus of GTC has been consistently clear. The company used that first gathering to promote a GPU called Fermi. While Fermi had its share of troubles — it was a little late, a little big, and a little hot (literally; it consumed too much power) — it was a pioneer: the first GPU specifically engineered to handle more than traditional raster graphics rendering. Fermi was built on an architecture geared not only to drawing polygons but also to processing a new range of compute-focused applications, an approach collectively known as general-purpose computing on GPUs, or GPGPU. With GPGPU and a new roadmap anchored by Fermi, Nvidia used that first GTC as its primary vehicle to start promoting its GPUs for a range of new uses, many of which happen to be right in the CAD industry's wheelhouse.
Fast-forward to 2014, and while the conference has grown to more than 500 sessions attended by more than 3,500 computing professionals from around the world, its focus and intent have not changed. At GTC, Nvidia’s happy to talk about GPUs for the conventional, client-side rendering uses we’re all accustomed to, but the focus is still on harnessing GPU technology for applications beyond raster graphics.
Nvidia CEO Jen-Hsun Huang made that crystal clear at the outset of his keynote, pitching the company’s latest progress in leveraging its GPUs in a range of nontraditional computing applications. And two aspects of that progress have particularly strong relevance to the CAD space: the convergence of visualization and simulation, and the extension of a workstation-caliber visual experience to the cloud. Nvidia's GPU technologies and roadmap are a boon for CAD, and in more ways than one.
Visualization and Simulation Coming Together
Right out of the chute, Huang reflected on two of the company's core beliefs in pursuing new applications for its GPUs: first, visualization will become increasingly computational in nature; and second, realistic visualization is far less interesting in the absence of equally realistic simulation.
Those principles apply as much to mechanical CAD and computer-aided engineering (CAE) as any other application space. The synthesis and simulation of 3D models can demand as much in the way of performance as does their visualization, if not more. One can argue that the quality and veracity of a CAE simulation are paramount, because if a simulation can’t accurately reflect the forces and dynamics at work, then the results aren’t worth looking at in the first place.
It makes sense in terms of performance as well as quality. Taken in the context of a CAD workflow, Amdahl’s Law tells us that in the typical iterative cycle — design, simulate, and render — exploiting a GPU to render lightning-fast won't do much to cut the overall time-to-complete if the requisite simulation plods along at a relative crawl. Shortening the whole workflow’s start-to-end time is what counts, and to do that, simulation performance needs a boost as well.
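The Amdahl's Law argument above can be made concrete with a toy calculation. The 20/60/20 split of the design-simulate-render cycle below is hypothetical, chosen only for illustration; the point is that accelerating rendering alone barely moves the overall time-to-complete:

```python
def amdahl_speedup(fractions, speedups):
    """Overall speedup of a workflow whose stages take the given
    fractions of the original runtime, each accelerated by its own factor."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    return 1.0 / sum(f / s for f, s in zip(fractions, speedups))

# Hypothetical iterative CAD cycle: 20% design, 60% simulate, 20% render.
# Accelerating only rendering 10x barely dents the total runtime...
render_only = amdahl_speedup([0.2, 0.6, 0.2], [1, 1, 10])   # ~1.22x
# ...while accelerating simulation as well transforms the whole loop.
both = amdahl_speedup([0.2, 0.6, 0.2], [1, 10, 10])         # ~3.57x
print(round(render_only, 2), round(both, 2))
```

Under these assumed fractions, GPU rendering alone yields only about a 1.22x shorter cycle, while GPU-accelerated simulation lifts that to roughly 3.57x — which is exactly why simulation performance needs the boost.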
So what Nvidia intends by “bringing visualization and simulation together,” as Huang put it, is to leverage its GPUs to accelerate simulations in much the same way it has done for rendering. It's not a new initiative; the company has been fostering the GPGPU evolution for years. Beyond introducing general-purpose architectural improvements in parts like Fermi, it went out on its own to create CUDA, a mature development environment designed to expose its GPUs’ horsepower to software vendors with purposes other than conventional rendering in mind.
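CUDA's core abstraction is a kernel: a small function executed independently for every element of a large array, across thousands of GPU threads at once. The plain-Python sketch below only mimics that data-parallel pattern sequentially (the function names are illustrative, not CUDA APIs), using SAXPY, a classic GPGPU example:

```python
def saxpy_kernel(i, a, x, y, out):
    # In a real CUDA kernel this body runs once per thread, with the
    # index i derived from the block and thread IDs; because every
    # element is independent, all n invocations can run concurrently.
    out[i] = a * x[i] + y[i]

def launch(n, a, x, y):
    # Stand-in for a kernel launch: a GPU would dispatch all n
    # invocations in parallel; here we simply loop to show the semantics.
    out = [0.0] * n
    for i in range(n):
        saxpy_kernel(i, a, x, y, out)
    return out

print(launch(4, 2.0, [1, 2, 3, 4], [10, 20, 30, 40]))
# prints [12.0, 24.0, 36.0, 48.0]
```

CAE workloads such as finite element and CFD solvers map well to GPUs precisely because their inner loops decompose into this kind of per-element, mutually independent arithmetic.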
And those efforts have been paying off, particularly in CAD spaces. Many of the first GPGPU-on-CUDA applications to show compelling speed increases and gain real traction in the market focused on CAE. Today, engineers and designers can exploit a GPU’s teraFLOPS to accelerate simulations, improve their accuracy, and broaden their scope with increasingly fine-grained models. Structural solvers running on Nvidia GPUs today shorten execution times for Simulia's Abaqus (finite element analysis), MSC's Nastran, and ANSYS Mechanical (structural integrity analysis).
Rendering and more: Nvidia GPUs are engineered to accelerate multiple stages of today's CAD workflow. All images courtesy of Nvidia.
The same goes for another staple CAE chore: computational fluid dynamics (CFD). Modeling the turbulence of a fluid interacting with a rigid body? That’s just the type of computation ripe for GPGPU acceleration. Consider Autodesk's Moldflow, which taps GPU horsepower to simulate the flow of injection-molded plastic.
Computational fluid dynamics (CFD) is another GPU-ready computing chore in the CAD wheelhouse.