Graphics Cards

GTC 2014: A Virtual Showcase for Tomorrow's CAD Tools

27 Mar, 2014 By: Alex Herrera

Nvidia’s GPU Technology Conference demonstrates that the computer graphics industry can never have enough applications.


The IT industry is overrun with acronyms, including some that are easily confused with each other. This month, two very similar-sounding computer graphics technology conferences — GDC and GTC — were held just one week apart in the very same convention center in San Jose, California. Although both are built around a common core of technology, the two conferences represent opposite ends of today’s computer graphics spectrum.

As its full name suggests, the Game Developer Conference is dedicated to video game development. But the conference Cadalyst readers will find more interesting is GTC — Nvidia's GPU Technology Conference, which was created specifically to advance the use of graphics processing units (GPUs) beyond gaming ... and beyond traditional GPU uses altogether.

Since its inception in 2009, the focus of GTC has been consistently clear. The company used that first gathering to promote a GPU called Fermi. While Fermi had its share of troubles — it was a little late, a little big, and a little hot (literally; it consumed too much power) — it was a pioneer: the first GPU specifically engineered to handle more than traditional raster graphics rendering. Fermi was built on an architecture geared not only to drawing polygons, but also to processing a new range of compute-focused applications, collectively known as general-purpose computing on GPUs, or GPGPU. With GPGPU and a new roadmap anchored by Fermi, Nvidia used that first GTC as its primary vehicle to start promoting its GPUs for a range of new uses, many of which happen to be right in the CAD industry's wheelhouse.

Fast-forward to 2014, and while the conference has grown to more than 500 sessions attended by more than 3,500 computing professionals from around the world, its focus and intent have not changed. At GTC, Nvidia’s happy to talk about GPUs for the conventional, client-side rendering uses we’re all accustomed to, but the focus is still on harnessing GPU technology for applications beyond raster graphics.

Nvidia CEO Jen-Hsun Huang made that crystal clear at the outset of his keynote, pitching the company’s latest progress in leveraging its GPUs in a range of nontraditional computing applications. And two aspects of that progress have particularly strong relevance to the CAD space: the convergence of visualization and simulation, and the extension of a workstation-caliber visual experience to the cloud. Nvidia's GPU technologies and roadmap are a boon for CAD, and in more ways than one.

Visualization and Simulation Coming Together

Right out of the chute, Huang reflected on two of the company’s core beliefs in pursuing new applications for its GPUs: first, that visualization will become ever more computational in nature, and second, that realistic visualization is far less interesting in the absence of equally realistic simulation.

Those principles apply as much to mechanical CAD and computer-aided engineering (CAE) as any other application space. The synthesis and simulation of 3D models can demand as much in the way of performance as does their visualization, if not more. One can argue that the quality and veracity of a CAE simulation are paramount, because if a simulation can’t accurately reflect the forces and dynamics at work, then the results aren’t worth looking at in the first place.

It makes sense in terms of performance as well as quality. Taken in the context of a CAD workflow, Amdahl’s Law tells us that in the typical iterative cycle — design, simulate, and render — exploiting a GPU to render lightning-fast won't do much to cut the overall time-to-complete if the requisite simulation plods along at a relative crawl. Shortening the whole workflow’s start-to-end time is what counts, and to do that, simulation performance needs a boost as well.
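Amdahl's Law makes the point concrete. As a hypothetical illustration (the numbers below are mine, not Nvidia's): if rendering is only a fraction of each design iteration, speeding up rendering alone barely moves the needle; accelerating the simulation too is what shortens the whole cycle.

```python
def amdahl_speedup(fraction_accelerated, factor):
    """Overall speedup when only a fraction of a workflow is sped up (Amdahl's Law)."""
    return 1.0 / ((1.0 - fraction_accelerated) + fraction_accelerated / factor)

# Suppose rendering is 30% of a design-simulate-render cycle, and a GPU
# makes it 10x faster -- the overall cycle speeds up by only about 1.37x:
render_only = amdahl_speedup(0.30, 10.0)

# Accelerate the simulation as well (say 90% of the cycle is now 10x
# faster), and the overall speedup jumps to roughly 5.3x:
both = amdahl_speedup(0.90, 10.0)
```

The assumed percentages are arbitrary, but the shape of the result is not: whichever stage is left unaccelerated quickly dominates the total time-to-complete.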

So what Nvidia intends by “bringing visualization and simulation together,” as Huang put it, is to leverage its GPUs to accelerate simulations in much the same way it's done for rendering. It's not a new initiative; the company has been fostering the GPGPU evolution for years. Beyond introducing general-purpose architectural improvements in parts like Fermi, it went out on its own to create CUDA, a mature development environment designed to expose its GPUs’ horsepower to software vendors with purposes other than conventional rendering in mind.

And those efforts have been paying off, particularly in CAD spaces. Many of the first GPGPU-on-CUDA applications to show compelling speed increases and gain real traction in the market focused on CAE. Today, engineers and designers can exploit a GPU’s teraFLOPS to accelerate simulations, improve their accuracy, and broaden their scope as models grow ever finer-grained. Structural solvers running on Nvidia GPUs today shorten execution times for Simulia's Abaqus (finite element analysis), MSC's Nastran, and ANSYS Mechanical (structural integrity analysis).


Rendering and more: Nvidia GPUs are engineered to accelerate multiple stages of today's CAD workflow. All images courtesy of Nvidia.


And ditto for another CAE staple chore: computational fluid dynamics (CFD). Modeling the turbulence of a fluid interacting with a rigid body? That’s just the type of computing ripe for GPGPU acceleration. Consider Autodesk's Moldflow, which taps GPU horsepower to simulate the flow of injection-molded plastic.


Computational fluid dynamics (CFD) is another GPU-ready computing chore in the CAD wheelhouse.

 

And if bringing simulation and visualization together is the ultimate goal, you can’t do any better than the case where the simulation is the visualization. Ray tracing has always represented the ultimate in photorealistic rendering for product styling. Unfortunately, it’s also been the most demanding in the way of computation, and therefore the slowest to use. Although ray-tracing algorithms render a scene, they do so in a manner that’s very different from the way GPUs typically render raster-based graphics.

Better characterized as a GPGPU application, ray tracing mimics how rays of light travel in nature — bouncing off objects, striking light sources, casting shadows, imparting color to each object. Nvidia’s own GPU-accelerated ray tracer, iray, is integrated into several CAD applications, including RTT, Autodesk 3ds Max, and Dassault Systèmes CATIA, to name a few. Kick off a photorealistic visualization in these bread-and-butter modeling-and-viewing packages, and the Nvidia GPU in your system will automatically accelerate the rendering.
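The heart of that mimicry is intersection testing: following each ray until it strikes geometry. A minimal sketch of a ray-sphere test, purely illustrative (this is a textbook formulation, not iray's implementation):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest sphere hit, or None.

    origin, direction, and center are 3-tuples; direction is assumed to be
    normalized. Solves |origin + t*direction - center|^2 = radius^2, a
    quadratic in t.
    """
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c  # the quadratic's 'a' term is 1 for a unit direction
    if disc < 0.0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

# A ray fired down the z-axis toward a unit sphere centered 5 units away
# strikes the near surface at t = 4.0:
t = ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
```

A production ray tracer runs billions of such tests — one reason the workload maps so naturally onto the massive parallelism of a GPU.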


Can you tell which is the photo and which is the iray rendering? I couldn't.



Shared Server-Side GPUs: Extending VDI and Cloud Services to the Professional Ranks

What Nvidia once code-named Monterey Technology was on the GTC agenda as far back as 2010, at that time in the context of technology under development. Two years later, the company first positioned the technology as a product, introducing it as Virtual Graphics Technology, or VGX. Tack on another two years of maturation — and a new name, GRID — and in 2014, Nvidia’s remote visualization technology is poised to disrupt several markets, including CAD.

Virtual desktop technology offers enterprises a range of compelling benefits — benefits already enjoyed by many users and applications with more modest visual demands. With GRID, Nvidia is essentially aiming to deliver those proven benefits of a virtual desktop infrastructure (VDI) to an entirely new class of user: professionals who demand high-performance graphics computing.

With GRID, Nvidia is moving GPU processing from the deskside client to the backroom server, promising a high-performance, interactive visual experience, regardless of whether the server resides in the local campus data center or is part of some cloud service halfway around the world. Each virtual machine running on the host can share access to one or more server GPUs. Users at various points in product development, manufacturing, procurement, and even marketing workflows can then view and manipulate a single, central database, rather than viewing multiple copies of the model stored on local clients. They can do so anywhere, anytime, and on virtually any device.


The GRID vGPU promises workstation-caliber VDI computing anywhere, anytime, on any device.


In the past year, Nvidia has been ramping up a supporting ecosystem for GRID, primarily with ISV partner Citrix, whose XenServer solution now supports GRID virtual GPU technology. At GTC ’14, the company built up the GRID ecosystem substantially with the addition of arguably the most desirable of VDI partners: VMware. Alongside Huang, VMware CTO Ben Fathi announced that the company’s ESX-based Horizon View VDI and DaaS (cloud-oriented desktop-as-a-service) platforms would support GRID vGPU technology. With key VDI ISVs on board, along with major names in server infrastructure (e.g., HP, Dell, and Fujitsu), Nvidia is looking at 2014 as the breakout year for GRID technology.

Visual VDI with desktop performance sharing server-side GPU resources … sounds perfect, especially to an IT manager tasked with maintaining a robust, productive, and dynamic CAD environment. So what’s the catch? Well, those utopian predictions must be tempered by one practical concern: latency. More than any other performance issue — including frame rate and display resolution — excessive latency could turn an otherwise productive interactive visual experience into a time-wasting horror show. If you’ve ever been in a videoconference where the time lag has everyone talking over each other, you know what I mean.

 

Early demos I saw were promising, with Nvidia and partners clearly paying close attention to end-to-end latency. Still, I wanted to see it in action on an ordinary network, not on some specially constructed (and potentially expensive) wide-area network (WAN). Well, I got to do just that at GTC, and I came away even more optimistic about the prospects for remote visualization for CAD professionals.

In a meeting with Nvidia’s general manager of Manufacturing Industries, Andrew Cresci, any skepticism I had about GRID's ability to support interactive visuals faded away. Andrew pulled out his laptop, accessed the San Jose conference hall Wi-Fi, and proceeded to log in to a GRID server in Tokyo. I was blown away by the snappy response, as he called up a couple of 3D models and rotated and zoomed in and out in real time. I noticed no significant lag in response as he manipulated a model located half a world away — over a basic, public network, no less. It’s no surprise that, according to Cresci, many major CAD/CAE installations (e.g., automotive) are currently evaluating GRID with trial installations.


A screenshot from an impromptu login to a Tokyo-based GRID server from San Jose, California.


Next-Generation GPU Horsepower on the Horizon


Nvidia’s technology plans sure look like they’re cut out for CAD: GPUs holistically accelerating multiple phases of a modern CAD workflow, and virtualized GPUs to extend VDI and cloud benefits to the workstation and product lifecycle management (PLM) user base. If your computing concerns revolve around CAD, that’s all good news.

But of course, while the concept and all the supporting infrastructure are essential to the final solution, ultimately the extent of the benefits will come down to good, old-fashioned GPU horsepower. Nvidia knows it, and their GPU roadmap reflects it. At GTC, the company unveiled its next generation of GPU, Pascal (taking the place of what the company had previously referred to as Volta), expected out in 2016. With Pascal, the company is promising a leap in performance per watt of nearly an order of magnitude over this year’s Maxwell (a GPU that should appear in the company’s professional-geared Quadro products in the next few months).


From Fermi to Pascal: Nvidia's GPU line over the history of the GTC.


A Strategy That Dovetails with CAD Demands


Wall Street categorizes Nvidia as a growth company, and in order to stay that way, the company knows the existing market for GPUs isn’t enough. That belief is what’s driving its investments and development initiatives, as I saw at GTC.

Fortunately, the areas Nvidia's targeting for market expansion turn out to map quite well into the needs of the CAD community. For example, its support today for convergence in simulation and visualization technology makes for a compelling, cost-effective proposition: the same GPU that renders models can now accelerate engineering analysis as well.

Tomorrow, the playing field for GPUs in CAD should only expand. After all, while many computing markets today are at or nearing the level of “good enough” performance, CAD isn’t. The thirst for compute power to improve both visualization and analysis is insatiable. And when it comes to quenching that thirst, Nvidia wants its next-generation GPUs to be the solution professionals reach for first — ideally, in even greater demand than a next-generation CPU. From what we’ve seen at this year’s GPU Technology Conference, the company is making substantial progress on that quest.


About the Author: Alex Herrera
