Workstation Performance: Tomorrow's Possibilities (Viewpoint Column)
30 Apr, 2008 By: Simon Floyd
More advances in hardware-related technology are on the horizon. What will you do with the ever-increasing power of your workstation?
Ever since digitization became commercially viable in the 1970s, one thing has remained constant: Hardware performance never seems to match our growing need to maximize productivity by working faster with less effort. This is true whether your first CAD system was MEDUSA on a Prime 2250 or Inventor 2008 on a Dell PC.
In the past, obtaining better performance was a relatively straightforward process. You simply acquired more megahertz, more RAM, or both. The decision to do so was most likely bound by financial constraints, but it was straightforward nonetheless. Today, however, purchasing the latest and greatest system doesn't always mean you're getting performance gains for your particular CAD application. Thankfully, though, there is a wealth of exciting new options on the horizon. But taking advantage of them requires an understanding of how those technologies affect the way we work and how they will interact with CAD in the future.
Predictions and Possibilities
In 1965, Intel engineer Gordon Moore made a profound statement that has held true through the decades. He predicted that the number of transistors in a silicon chip would double every two years and that evolution in technology would fuel global innovation at an exponential rate. Although Moore's Law still holds true today, how much longer will it do so and what will that mean?
Clock speed has been, by far, the easiest dimension of performance to quantify: The higher the number, the faster your CAD application responded. As a result, software developers counted on it as a way of ensuring response times rather than investing time in finding other performance-enhancing approaches for CAD software. Although transistor densities have continued to increase steadily, clock speeds stalled at around 3 GHz circa 2003. If we applied Moore's Law–type thinking to clock speed, we should be able to buy at least 10 GHz CPUs by now. However, the fastest CPU available today runs at 3.80 GHz.
There's a simple reason why 10 GHz seems an improbability. Today's CPUs are power hungry (consuming 100 W is common) and consequently emit immense amounts of heat (figure 1). At the Intel Developer Forum in 2004, Pat Gelsinger, Intel's chief technology officer and senior vice-president, said that the heat emitted from modern processors, measured in power density (watts per square centimeter), rivaled that of a nuclear reactor core! More fantastic still is the notion of CPU power density approaching that of the Sun's surface beyond 2010. This sounds like pure science fiction, but some say it could become reality.
Figure 1. In CPU architecture today, heat is becoming an unmanageable problem. (Courtesy of Pat Gelsinger, Intel Developer Forum, Spring 2004)
Regardless, the heat challenge has given life to a new generation of CPUs that enticingly offer the power, and presumably the performance, of two, four, and possibly more cores. If CPU manufacturers can double the number of cores every two years, we could see 128-core CPUs by 2018. Imagine buying a personal computing device in 2018 with near-supercomputing performance!
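A quick back-of-the-envelope check of that projection (assuming, hypothetically, a four-core CPU in 2008 and a doubling of cores every two years):

```cpp
#include <iostream>

// Hypothetical projection: start from a 4-core CPU in 2008 and double the
// core count every two years, in the spirit of Moore's Law.
int main() {
    int cores = 4;
    for (int year = 2008; year <= 2018; year += 2) {
        std::cout << year << ": " << cores << " cores\n";
        cores *= 2;
    }
    // The final line printed is "2018: 128 cores".
}
```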
The Digital Brain: Science or Fiction?
In the television show Battlestar Galactica, the human race of the distant future regrets developing a slave race of robots. These robots evolve to become synthetic humanoids that eventually threaten human existence as they struggle to understand their role as sentient life forms.
This scenario seems utterly fantastic in the true sense of the word. You might even say impossible: A machine thinking for itself and being capable of self-improvement (both physically and mentally) with intelligence that surpasses its human inventors isn't possible, right? Could a computer ever match human brainpower?
The human brain is a massive network of intricately connected neurons. Interestingly, the communication path between each neuron is millions of times slower than an average CPU. If you use the old clock-speed measure for performance, you could easily believe that the CPU has a substantial advantage. After all, your computer calculates enormous complexities in a fraction of the time it takes you to find a pad and pencil. Yet computers are absolutely embryonic on the evolutionary scale. Only last year did a voice-recognition program, called Sync, hit the mainstream in some Ford vehicles. Although useful and fun, such recognition controls a relatively small set of tasks.
So how does the brain keep up at such slow speeds? It massively parallelizes communication among approximately 100 billion neurons, completing complex computations in a cumulative time that no computer on the planet can match. The interesting question is, of course, how far away we are from a similar massively parallel computing capability in a practical form factor. Let's explore this notion.
First, CPUs are rapidly scaling out with more cores at lower clock speeds, yet each core still communicates far faster than neurons do. It's very possible that CPUs with thousands of cores might consume less power because they would operate at slower clock speeds. Second, CPUs can be paired in multiples within a single machine (multicore, multiprocessor). Third, each machine can, in turn, be networked into a cluster or farm of computers, effectively creating a supercomputer of gargantuan scale. It therefore makes some sense that if such a system could be devised on a chip in 3D form rather than as interconnected flat layers, the CPU might rival the brain's architecture.
But there is no guarantee of intelligence. Developing an intellect would require software that can take advantage of parallel computation and address vast stores of memory in which to hold, change, and evolve both its results and its own design. Conceptually, it seems possible, with one exception: Although hardware might one day mimic the architecture of the brain, there is no precedent for massively parallel software, because coding techniques are bound by a legacy of serialized computing architecture.
Talking Parallelism
Allow me to reveal what might not be obvious: On multicore CPUs, CAD applications do not run any faster. In fact, they might even be slower, because multicore CPUs generally have lower clock speeds than their single-core predecessors. From a user's perspective, however, these CPUs have dramatically improved how people interact with computers. Switching between applications is substantially faster. Multimedia applications run flawlessly, delivering eye-popping, high-definition content. Some photo-editing applications and animation packages output images and video noticeably faster.
However, CAD applications have not capitalized on the multicore opportunity and consequently have not derived any tangible direct benefits. To take advantage of multiple CPU cores, software must thread its work across cores simultaneously or intelligently split tasks among cores to balance performance and interactivity. An animation application, for example, can dispatch frames to each core simultaneously, effectively processing frames 1 and 2 of the sequence at the same time and then scheduling frames 3 and 4. In other words, unlike the performance benefit of increased clock speed, which comes essentially for free, multicore advantages can be realized only if the software is designed accordingly (figure 2). The same is true for dual-processor PCs, and the effect is compounded if those CPUs are multicore.
Figure 2. There is no free lunch for traditional software. Without highly concurrent software design, performance will remain static.
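As a rough sketch of the frame-scheduling idea described above (the render_frame function here is a hypothetical stand-in, not any particular animation package's API), each frame becomes an independent task that the operating system spreads across the available cores:

```cpp
#include <cstdio>
#include <future>
#include <vector>

// Hypothetical stand-in for an animation package's per-frame renderer.
void render_frame(int frame) {
    std::printf("rendered frame %d\n", frame);
}

int main() {
    const int total_frames = 8;
    std::vector<std::future<void>> tasks;

    // Launch each frame as an independent asynchronous task; the operating
    // system schedules the resulting threads across the available cores, so
    // frames 1 and 2 can render at the same time while 3 and 4 wait their turn.
    for (int frame = 1; frame <= total_frames; ++frame)
        tasks.push_back(std::async(std::launch::async, render_frame, frame));

    // Wait for every frame to finish before exiting.
    for (auto& t : tasks)
        t.get();
}
```

On a single-core machine the same code gains nothing, which is exactly the "no free lunch" point of figure 2: the software, not just the hardware, determines whether extra cores translate into performance.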
Consequently, software developers must rethink the way they develop, test, and market CAD applications. You might recall the much-heralded hyperthreading technology: The operating system reported two CPUs when there was physically only one. That technology was largely untapped by the CAD market for the same reasons that multicore remains untapped. To be fair, however, one needs to ask: What, practically speaking, can be computed in parallel that would have a positive effect on the way we work? I am quite certain that if a calculation could be computed in parallel, it would provide a tangible performance benefit. This is especially true in the CAD space, where complex calculations are performed with virtually every action taken.
Imagine what is possible when you suddenly have brain-like computational power and the ability to concurrently solve tasks or operations with no loss of response in your CAD application. It's not just the mathematical acceleration that can be gained — it's also the effect on the process you follow when you work. For example, would it be useful to have the ability to continue working while the complex shelling command for a plastic injection–molded component is performed at the same time? What type of work could be done while waiting? Presumably, you wouldn't do anything that was dependent on the outcome of the shelling operation!
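A minimal sketch of that idea, assuming a hypothetical shell_component() operation standing in for the expensive CAD calculation: the work runs on a background thread while the interactive thread keeps accepting unrelated tasks and picks up the result once it is ready.

```cpp
#include <chrono>
#include <future>
#include <iostream>
#include <thread>

// Hypothetical stand-in for a long-running CAD operation, such as shelling
// a plastic injection-molded component.
double shell_component() {
    std::this_thread::sleep_for(std::chrono::seconds(2));  // pretend to work
    return 1.5;  // e.g., resulting wall thickness in mm
}

int main() {
    // Kick the expensive operation off on a worker thread.
    auto result = std::async(std::launch::async, shell_component);

    // The interactive thread stays responsive; the user keeps working on
    // tasks that do not depend on the shelling outcome.
    while (result.wait_for(std::chrono::milliseconds(250)) != std::future_status::ready) {
        std::cout << "...still modeling other, unrelated features...\n";
    }

    std::cout << "Shelling finished; wall thickness: " << result.get() << " mm\n";
}
```

The design question remains the one posed above: what work is actually worth doing in that window if it cannot depend on the result?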
Similarly, would it be useful to dynamically compute the energy efficiency of a new building as it is being designed, or would it just be a set of numbers that changed with every new wall and window without offering any practical benefit to the design? Would it be useful if CAD software dynamically checked compliance against government regulations as it was being used?
Those are the types of challenges that must be considered when contemplating the possibility of even greater productivity. It's not about a feature that saves you hours of work. It's about how the discipline might be affected by the advantages offered by future advances in computing performance. At worst, the industry could be at risk of succumbing to a highly marketable parallel-performance measure that makes no practical sense in terms of how we work, or that initiates changes that are counterproductive or unnatural. At best, it could open a world of new opportunities that positively contribute to a practice, discipline, or domain.
Parallel computing offers enormous benefits to those who can envision their future needs and do so with an understanding of how changes can improve productivity in their particular disciplines. It's a difficult undertaking because most professionals' needs are immediate. To ensure tomorrow's technologies provide positive contributions to your personal, business, and industry performance, engage your CAD vendor today and help guide it down the right path. The new performance frontier depends on how you plan to apply those technologies within your own practice.
The Quiet Achiever
It may come as a huge surprise, but a powerhouse is quietly resting in today's computers. Its capabilities are immense, yet they remain largely untapped. It measures performance in gigaFLOPS rather than gigahertz, and it brings realism to Gears of War and Crysis. It's the ultimate quiet achiever: the GPU. That's not a typo: the G is for graphics, and it has the pure grunt to deliver a whopping 575 billion or more floating-point operations per second (575 GFLOPS), compared with the meager 50 or so GFLOPS observed for a dual-core CPU (figure 3).
Figure 3. When comparing GPU vs. CPU capabilities, GPUs offer substantially higher performance. (Courtesy of NVIDIA)
GPUs offer blazing performance with enormous bandwidth. They present the ultimate opportunity for performance gains: simply offload computations from the CPU to the GPU. It sounds relatively straightforward, but it's not an easy task; it requires software developers to find nongraphical tasks that can be processed on the GPU.
Accessing the GPU for nongraphical, or general, purposes (GPGPU) has limitations. GPUs are designed for parallel processing using a programming technique called stream processing, in which data is streamed through the same operation in parallel, dramatically increasing throughput and therefore cumulative computational speed. This is why a 600 MHz GPU can outperform a 3 GHz CPU: It massively parallelizes operations, much as the human brain does, rather than working through them sequentially as a CPU does. As a result, software developers must pick and choose which operations can run in parallel and which ones the GPU can compute at the required level of precision.
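To make the stream-processing idea concrete: the model amounts to applying one small operation independently to every element of a large data set, which is what lets thousands of GPU stream processors work at once. Below is a minimal CPU-side sketch of the same pattern, using C++'s standard parallel algorithms as a stand-in for a GPU kernel (the particle-update example is hypothetical):

```cpp
#include <algorithm>
#include <execution>
#include <iostream>
#include <vector>

int main() {
    // A large "stream" of data: per-particle positions and velocities.
    std::vector<float> position(1'000'000, 0.0f);
    std::vector<float> velocity(1'000'000, 1.0f);
    const float dt = 0.016f;  // one simulation time step

    // Every element is updated independently by the same small operation,
    // so the work can be spread across many cores -- or, expressed as a
    // GPU kernel, across thousands of stream processors at once.
    std::transform(std::execution::par_unseq,
                   position.begin(), position.end(),  // input 1: positions
                   velocity.begin(),                   // input 2: velocities
                   position.begin(),                   // output: new positions
                   [dt](float p, float v) { return p + v * dt; });

    std::cout << "first particle moved to " << position[0] << "\n";
}
```

The catch described above still applies: only operations that are genuinely element-independent, and that tolerate the GPU's precision limits, are candidates for this treatment.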
In our world, simulation is ripe for GPU computation. Visualizing the aerodynamic performance of a plane, car, or train is entirely possible using a GPU, as are fluid flows, particle dispersions, collision outcomes, and other physics-based results. All of these computations can be performed in real time, offering unparalleled interactivity and user experiences. We are on the cusp of a GPU revolution that will fundamentally change what can be accomplished in CAD applications. Consider how the GPU market is growing and outpacing Moore's Law, then consider the potential outcomes if this trend continues.
We may be a few generations away from CAD applications that can fully harness the power of parallel processing on the desktop (whether it's the CPU, the GPU, or both), but there is no time like the present to start seriously thinking about how best to take advantage of increased performance. There's also no better time to contribute your unique knowledge, however you can, to influence application design and technology development. It's how we as professionals can drive CAD innovation and foster industry progress that will help each of us build our own — and the CAD industry's — sustainable futures.