
How Much GPU Memory Do You Need for CAD?

30 Apr, 2014 By: Alex Herrera

Herrera on Hardware: To answer this question, start by learning how the technology affects your workflow.

Editor’s Note: We’re delighted to introduce this new column to the Cadalyst lineup. In “Herrera on Hardware,” consultant and Cadalyst Contributing Editor Alex Herrera will explain everything that CAD managers and users need to know about hardware for CAD, from memory to hard drives to processors — and beyond. Look for “Herrera on Hardware” every month on the Cadalyst website, as well as in future issues of Cadalyst magazine.

We constantly marvel at what the progression in silicon technology has delivered over the years, with each successive generation bringing much greater capability for far fewer dollars. Nowhere is that progress more apparent than in the geometric growth of random access memory (RAM) — and not just in system memory, but in the dedicated memory tied to graphics processing units (GPUs).

Growth in GPU memory has been staggering: If you were to take the size of a typical desktop computer's system memory from the early 1990s and multiply it by a thousand, you'd be in the ballpark of what a typical workstation-caliber GPU can boast as its own private graphics memory today. Professional GPU card memories start around 0.5 gigabytes (GB) in size and head up from there, all the way to Nvidia's top-of-the-line 12-GB Quadro K6000 and AMD's recently introduced 16-GB FirePro W9100.

With such a huge variety of GPU memory sizes available, at prices ranging from $100 to $5,000, the obvious question facing CAD professionals is, "How much do I need?" There's no one-size-fits-all answer; it depends on what type of work, and what type of content, your typical day involves. But first, let’s look at how the technology is applied.

A Brief History of GPU Memory Uses

What use does a GPU's private memory serve? Not long ago, it served one primary purpose: acting as a local buffer in which raster-based graphics engines assemble each frame prior to display. For that reason, graphics or video memory has historically been referred to, equivalently, as the framebuffer. But that designation is no longer apt, as the composition of GPU memory usage has changed dramatically in the past twenty years.

In the early 1990s, technology limits and cost constraints meant most graphics controllers were limited to just a few megabytes (MB) of storage. A PC graphics card might have carried only one or two MB, and a high-end workstation card might have been outfitted with eight or, very generously, 16 MB. Back then, screen resolutions of 1024x768 or 1280x1024 meant the vast majority of memory was consumed by the framebuffer. Off-screen memory tended to be minimal, storing things like glyphs (e.g., text patterns) and 2D block patterns that would be copied on screen.

Fast-forward to today, and the allocation of GPU memory is reversed. With memory density now orders of magnitude higher, and screen resolution growth over the same period modest by comparison, framebuffer is now the minority consumer of GPU memory. Though some professional segments are migrating to 4K resolution, for the majority of professional CAD users, 1920x1080 per screen is the norm. Even factoring in two or three displays on the desktop, and double or triple buffering (for animation and/or stereo 3D), off-screen storage now represents the bulk of available memory — by far.
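To put rough numbers on that reversal, here's a back-of-the-envelope sketch in Python. The resolutions come from the discussion above; the color depths, buffer counts, and card sizes are illustrative assumptions, not figures from any particular product.

```python
def framebuffer_bytes(width, height, bytes_per_pixel, buffers=1, displays=1):
    """Bytes of GPU memory consumed by on-screen (display) buffers."""
    return width * height * bytes_per_pixel * buffers * displays

MB = 1024 ** 2
GB = 1024 ** 3

# 1992 (assumed): 1280x1024, 8-bit color, double buffered, plus a 16-bit
# Z-buffer, on an 8 MB high-end workstation card
color_1992 = framebuffer_bytes(1280, 1024, 1, buffers=2)
zbuf_1992 = framebuffer_bytes(1280, 1024, 2)
share_1992 = (color_1992 + zbuf_1992) / (8 * MB)

# 2014 (assumed): three 1920x1080 displays, 32-bit color, triple buffered,
# on a 2 GB card
fb_2014 = framebuffer_bytes(1920, 1080, 4, buffers=3, displays=3)
share_2014 = fb_2014 / (2 * GB)

print(f"1992: {(color_1992 + zbuf_1992) / MB:.1f} MB of 8 MB ({share_1992:.1%})")
print(f"2014: {fb_2014 / MB:.1f} MB of 2,048 MB ({share_2014:.1%})")
```

Even under these generous modern assumptions, the on-screen buffers that dominated a 1992-class card amount to only a few percent of a 2-GB card today, leaving the rest as off-screen storage.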

Comparison of typical graphics memory allocation in 1992 and 2014.

A Big Effect on CAD Performance

So if GPU memories are now orders of magnitude larger than what's required for even the most generous allocation of framebuffer, does that mean the rest of that memory is going to waste? Not at all. In fact, all that extra available off-screen memory is what can make or break performance for the high-demand visual computing common in today's CAD workflows.

Professional applications now rely heavily on off-screen memory for storing — and having high-bandwidth, low-latency access to — a multitude of data types. Two of the biggest consumers are 3D model data and textures used to add detail to the model's surfaces. In oil and gas exploration, for example, a model of survey samples can yield data sets that push well beyond 1 TB in size. In the making of the seminal CGI film Avatar, artists wrapped so many layers of high-resolution textures on a Na’vi character, it took roughly 150 GB in all to bring a single character to life.

CAD applications don't usually venture into either of those extremes, as product design, engineering simulations, or building information modeling (BIM) datasets don't typically demand as much in the way of high-resolution textures or 3D volumes. Still, the performance of CAD applications is sensitive to memory size (and bandwidth), to a degree that varies dramatically according to the size of the models being rendered or simulated.


However, depending on how many GB we're talking about, the merits of a large memory footprint aren't always visible in the CAD-relevant benchmarks we look at to gauge a GPU's performance level. One benchmark that does stress GPU memory — to a point — is SPEC's SPECviewperf 12. SPECviewperf renders several viewsets, and two of them (specific to Dassault Systèmes CATIA and PTC Creo) can chew up around 2 GB of GPU memory. So while SPECviewperf may modestly punish cards with less than 2 GB, it won't particularly reward GPUs with physical memory sizes in excess of 2 GB.

That doesn't mean a card with 3 GB or more won't deliver performance benefits, because as model size grows, so does the consumption of GPU memory. And while running out of physical GPU memory shouldn't be catastrophic in the blue-screen sense (GPU drivers are designed to adapt when storage demands outpace capacity), it's generally a performance killer. When it comes to GPU processing, executing out of local memory delivers performance that's orders of magnitude higher than executing piecemeal out of system memory (i.e., repeatedly copying subsets of the data into GPU memory), and that will directly translate to a major slowdown in rendering throughput.
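A quick way to get a feel for why spilling hurts is to compare the bandwidth of on-card memory against the PCI Express link that any spilled data must cross. The figures below (roughly 200 GB/s for on-card GDDR5, roughly 16 GB/s for a PCIe 3.0 x16 link, and a 3-GB per-frame working set) are illustrative assumptions; raw bandwidth alone opens an order-of-magnitude gap, and transfer latency plus redundant copies widen it further.

```python
# Illustrative, assumed figures -- not measurements of any specific card.
working_set_gb = 3.0   # model data touched per rendered frame
local_bw = 200.0       # GB/s, on-card graphics memory (assumed)
pcie_bw = 16.0         # GB/s, PCIe 3.0 x16 host-to-GPU link (assumed)

# Time just to move the working set, before any rendering work happens
t_local = working_set_gb / local_bw
t_spill = working_set_gb / pcie_bw

print(f"from local memory: {t_local * 1000:.0f} ms per frame")
print(f"over PCIe:         {t_spill * 1000:.0f} ms per frame "
      f"({t_spill / t_local:.1f}x slower)")
```

That bandwidth ratio alone turns an interactive frame budget into a stutter; the repeated re-copying a spilled working set requires makes the real-world penalty worse still.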

Accordingly, the larger and more complex your CAD models are, the bigger your GPU's memory should be. Where a typical consumer product might only require a few hundred MB of local storage, a model of a car or aircraft could consume 8 GB or more. So while an entry- to mid-range GPU with 1 or 2 GB may be enough for the bulk of CAD use, there are always pockets of users that can benefit from more, and a few that want the most possible, period.
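As a rough illustration of that scaling, the sketch below estimates a worst-case geometry footprint from triangle count. The 32-byte vertex layout (position, normal, texture coordinates) and the triangle counts are hypothetical assumptions chosen to land in the ranges named above, not data from the article.

```python
def geometry_bytes(triangles, bytes_per_vertex=32, vertices_per_triangle=3):
    # Worst case: no vertex sharing between adjacent triangles.
    # Real applications also add textures, buffers, and other structures
    # on top of raw geometry, so treat this as a floor, not a total.
    return triangles * vertices_per_triangle * bytes_per_vertex

GB = 1024 ** 3

# Hypothetical model sizes for illustration
for name, tris in [("consumer product", 3_000_000),
                   ("full vehicle assembly", 100_000_000)]:
    print(f"{name}: ~{geometry_bytes(tris) / GB:.2f} GB of vertex data")
```

Under these assumptions, a few million triangles fit comfortably in a 1- or 2-GB card, while a hundred-million-triangle assembly pushes past 8 GB, which is consistent with the ranges described above.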

Ranges of typical CAD model sizes today. (Size does not include frame buffers, textures, or any other ancillary data.) Data sources: Cadalyst, Nvidia, and AMD.

Another Reason for Larger GPU Memory Sizes: GPGPU

Raster-based graphics rendering is no longer the only thing occupying GPU execution cycles and memory. General-purpose computing on GPUs, or GPGPU, represents a growing usage model, leveraging GPUs' prowess in highly parallel, floating-point intensive math. More than a handful of the most compelling uses of GPGPU computing apply to CAD and CAE.

Take raytraced renderers, for example, which are highly appealing on the styling side of the product design workflow. A raytracer prefers access to an entire scene’s dataset, fetching the appropriate object description as the ray bounces around the scene. In the worst case, local GPU memory just isn’t big enough to store all the scene’s data structures, and rendering must be unceremoniously kicked back to the host for vastly slower processing in software — no graceful degradation here.

GPU Memory Rules of Thumb

If you're pushing the envelope on model complexity, you're probably well aware of the heavy demands on all aspects of your computing hardware, GPUs included. And you're going to opt for a higher-end GPU with four, eight, or more GB of local memory. But for the majority of CAD users today, a card that offers around 2 GB of GPU memory and falls in the $200 to $500 range will hit the sweet spot in terms of both price and capacity.

Still, it's a decision you'll want to weigh in the context of tomorrow's needs as well, because even if your GPU gives you all you need for today's CAD project, you can't expect things to stay that way. Inevitably, the nature of this competitive business demands you do more on the next project — create broader-scope models with finer-grained simulations and higher-quality rendering. It's one individual part today, but will it be an entire assembly tomorrow? As with all IT buying decisions, consider GPU performance that will scale into a future filled with more complex models, projects, and workflows.

About the Author: Alex Herrera

