
Hardware Benchmarks for CAD, Part 1: Value, Limitations, and a Strategy for Use

By: Alex Herrera

Herrera on Hardware: Benchmarks remain the best performance evaluation tools, but not all benchmarks are created equal.


For those whose daily productivity depends on how fast their workstation can model, simulate, visualize, and render, searching out the most capable hardware to handle those tasks can be a rewarding pursuit. But what's the best way to assess and compare the performance of GPUs and workstations for CAD workflows? There is no one precise way. Rather, it becomes an exercise in triangulating among several datapoints, while always keeping in mind the context and characteristics of your specific application, workflow, and compute loads, as well as other criteria that might matter, like price and future-focused features. Though not perfect, benchmarks remain the best performance evaluation tools … but not all benchmarks are created equal, nor can they validate the same conclusions.

Are Spec Sheets and Performance Numbers Enough?

You might be wondering, why not just check the vendor spec sheets and performance numbers? Datasheets supplied by GPU and workstation vendors aren’t usually the easiest to navigate or exploit. Consider a GPU’s ratings for FLOPS or graphics memory bandwidth, or a CPU’s specs of core counts and GHz ratings. Unfortunately, there's a fundamental problem with virtually all of these metrics: they tend to reflect theoretical hardware limits, indicating performance levels that can be reached in very specific circumstances, most of which are not realistic when running real-world applications.

Any one of these numbers means little on its own, unless all of the other salient architectural metrics can deliver at similarly capable levels. For example, consider a couple of hypothetical CPUs: model A from Intel offering 8 cores running at a nominal 4.3 GHz, and model B from AMD offering 12 cores at a nominal 4.0 GHz. There are no valid conclusions to draw from those numbers by themselves without knowing the answers to a few questions: How comparable are the respective core microarchitectures? How threaded are your critical workloads? And how much time do you spend running those workloads versus, say, modeling?
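To see why core count and clock speed alone can't settle the question, consider a rough Amdahl's-law sketch in Python. It uses the hypothetical core and clock figures above and assumes identical per-core microarchitecture (which real chips rarely share), so it is purely illustrative; the point is that the "winner" flips depending on how threaded the workload is:

```python
# Back-of-envelope comparison of two hypothetical CPUs using Amdahl's law.
# Assumes identical per-clock performance per core, which real
# microarchitectures rarely share -- purely illustrative.

def relative_throughput(cores: int, ghz: float, parallel_fraction: float) -> float:
    """Estimate workload throughput in 'single-core GHz equivalents'.

    parallel_fraction: share of the workload that scales across cores (0..1).
    """
    serial = 1.0 - parallel_fraction
    # Amdahl's law: relative time = serial + parallel/cores,
    # then scale by clock speed.
    return ghz / (serial + parallel_fraction / cores)

for frac in (0.0, 0.5, 0.9, 1.0):
    a = relative_throughput(8, 4.3, frac)   # hypothetical 8-core, 4.3-GHz model
    b = relative_throughput(12, 4.0, frac)  # hypothetical 12-core, 4.0-GHz model
    winner = "8-core" if a > b else "12-core"
    print(f"parallel={frac:.0%}: 8-core={a:.1f}, 12-core={b:.1f} -> {winner}")
```

With these invented numbers, the faster-clocked 8-core part leads on lightly threaded work, while the 12-core part pulls ahead only once roughly 90% of the workload scales across cores.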

Comparing raw GPU metrics is no more conclusive. Where GPU A's array of internal computing engines might promise X TFLOPS (trillion floating-point operations per second) of peak computing throughput, it may only utilize a small fraction of those FLOPS if the instruction stream and the rest of the architecture (e.g., the GPU's input/output, instruction pipeline, and memory subsystems) can't supply those floating-point engines with data fast enough.
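That compute-versus-memory tradeoff is often illustrated with a simple roofline-style estimate: attainable throughput is capped by whichever is lower, peak compute or the rate at which memory can feed the compute engines. The sketch below uses invented figures, not the specs of any real GPU:

```python
# A simple "roofline" estimate of why a GPU rarely sustains its peak TFLOPS.
# All numbers below are made up for illustration, not specs of any real GPU.

def attainable_tflops(peak_tflops: float, bandwidth_gbs: float,
                      flops_per_byte: float) -> float:
    """Attainable throughput is capped by either compute or memory traffic.

    flops_per_byte: arithmetic intensity of the workload -- how many
    floating-point operations it performs per byte fetched from memory.
    """
    memory_bound_tflops = bandwidth_gbs * flops_per_byte / 1000.0  # GB/s -> TFLOPS
    return min(peak_tflops, memory_bound_tflops)

# Hypothetical GPU: 20 peak TFLOPS, 500 GB/s of memory bandwidth.
# A workload doing ~4 FLOPs per byte touches only a tenth of the
# advertised peak; a compute-heavy one can actually reach it.
print(attainable_tflops(20.0, 500.0, 4.0))    # memory-bound: 2.0 TFLOPS
print(attainable_tflops(20.0, 500.0, 100.0))  # compute-bound: 20.0 TFLOPS
```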

For example, take a look at the following table detailing some typical raw specs for three actual GPUs on the market today. GPU A offers both more local memory bandwidth and a higher maximum TFLOPS throughput than GPU B or C. The immediate conclusion that A therefore offers better performance than B or C would, however, be a spurious one: as the Viewperf benchmark results that follow indicate (more on Viewperf ahead), A does not provide better 3D graphics performance. In fact, it trails B and C by a fair amount.


Raw specs on three current, unnamed GPUs that don’t track relative performance.



Despite GPU A’s higher raw specs, it’s outclassed by both GPU B and GPU C in the Viewperf benchmark (results normalized to GPU C).

So deducing the kind of performance you might expect for your workflow from the raw specs on product tables is at best speculative, and at worst worthless. Hence the need for some type of benchmarking to better ascertain how those raw specs translate to the real throughput you’d see over the course of your computing day.
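An aside on reading results like the Viewperf table above: normalizing benchmark scores to a baseline, as that table does with GPU C, is simply dividing each score by the baseline's. A minimal sketch, using invented placeholder scores rather than real results:

```python
# Normalizing benchmark scores to a baseline GPU, as in the Viewperf
# table above. The scores here are invented placeholders, not real results.

def normalize(scores: dict[str, float], baseline: str) -> dict[str, float]:
    """Express every GPU's score as a multiple of the baseline GPU's."""
    base = scores[baseline]
    return {gpu: score / base for gpu, score in scores.items()}

raw = {"GPU A": 92.0, "GPU B": 141.0, "GPU C": 118.0}  # hypothetical composites
for gpu, rel in normalize(raw, "GPU C").items():
    print(f"{gpu}: {rel:.2f}x of GPU C")
```

Normalized results make relative standing obvious at a glance (the baseline always reads 1.00x), but remember they say nothing about absolute frame rates or runtimes.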

Vendors know the limitations of raw specs as well as anyone, so to help guide buyers most will offer up their own performance claims on certain workloads, supported by in-house (or occasionally paid third-party) benchmarking. While there certainly have been instances of outright deceptive performance claims in this industry, they are the minority, and calling vendors’ marketing figures dishonest generally overstates the case. But that doesn’t mean you’re getting an objective assessment of a product relative to others: a vendor will always pitch the performance measures that put its wares in the best light. So whether dishonest or not (and again, usually not), a vendor’s claims typically won’t tell the whole story, and may need to be taken with more than a few grains of salt.

SPEC: The Standard for Professional Computing Benchmarks

The ideal benchmark would be one customized to your workflow and project data: one that walks through every sequence of your day (and perhaps multiple days), based on exactly what you do, what data you do it to, and how you do it. That would mean essentially creating your own benchmark, including both representative content and a series of application commands to visualize and operate on that content.

Obviously, that’s not what the vast majority of us would do (though a big enterprise buyer of hundreds of workstations just might). For most, the next best thing is to rely on unbiased, third-party benchmarks that best emulate the workloads of CAD designers, engineers, architects, and builders, of which I recommend two: those from the Standard Performance Evaluation Corporation (SPEC), and those from the independent software vendors (ISVs) that develop and support your workhorse CAD applications (the caveat being that the latter may or may not exist, depending on the application).

SPEC is the longtime, independent torchbearer for workstation-caliber benchmarking, governing three sets of benchmarks that apply specifically to highly visual, professional-caliber applications: SPECviewperf, SPECapc, and SPECworkstation (the successor to SPECwpc). Each focuses on performance from a different angle: SPECviewperf provides a solid assessment of your system/GPU’s 3D graphics capabilities; SPECapc runs one specific application with common tasks and content to best simulate real-world computing; and SPECworkstation runs a series of focused tests designed to individually stress all the key workstation subsystems to reflect their relative capabilities. All three benchmarks are available for free download to everyone except vendors of computers and related products and services that are not members of the SPEC Graphics and Workstation Performance Group (SPEC/GWPG).

Editor's Note: This series continues in "Hardware Benchmarks for CAD, Part 2: Uses of SPEC and ISV Benchmarks."


About the Author: Alex Herrera




