Cadalyst
Design Visualization

Event Report: NVISION 08, NVIDIA's Inaugural Visual Computing Event

28 Aug, 2008

NVIDIA's CUDA swims into CPU territory.


Usually, when Forbes publishes a feature story about an industry titan, its cover shows someone's head and torso against a plain background. In August 2007, it was diamond merchant Laurence Graff in a chalk-striped gray suit. In May 2008, it was casino moguls Frank and Lorenzo Fertitta in suits and open-collar shirts. But in January 2008, when the magazine announced the company of the year, its art director abandoned this template. Both the cover and the inside page depicted a CEO in a T-shirt, blasting an alien creature with a laser gun. The subject's heroic pose and the explosion of light could easily pass for a sci-fi movie poster. The man on the cover was Jen-Hsun Huang, cofounder and CEO of NVIDIA.

This week, Forbes' poster child for technology had a coming-out party in San Jose, California. NVISION 08 (August 25-27), billed as "the visual revolution," was an inaugural event hosted by the graphics wizard NVIDIA. It brought together gamers, designers, filmmakers, and software developers who are always on the lookout for solutions that can make their digital monsters more frightening or virtual automobiles more realistic.

NVIDIA's CEO Jen-Hsun Huang takes aim at CPU giants by advocating the use of GPUs for compute-intensive tasks (image courtesy of Forbes and NVIDIA, from the January 2008 Forbes article naming NVIDIA company of the year).
During his keynote speech on Monday, August 25, at the Center for Performing Arts, Huang brandished NVIDIA's latest weapon, CUDA (short for Compute Unified Device Architecture), embedded in the company's GeForce, Tesla, and Quadro cards. CUDA is a C-language programming environment that lets the graphics processing unit (GPU) take on some of the compute-intensive tasks commonly relegated to the central processing unit (CPU). Who's Huang taking aim at with CUDA? CPU makers like Intel are certainly in his crosshairs.

Not Just a Graphics Processor
CUDA, according to NVIDIA, "enables programmers and developers to write software to solve complex computational problems in a fraction of the time by tapping into the many-core parallel processing power of GPUs. ... Thousands of software programmers are already using the free CUDA software tools to accelerate applications -- from video and audio encoding to oil and gas exploration, product design, medical imaging, and scientific research."

The company points out that CUDA's strength comes from its interoperability with OpenGL and DirectX and its support for both 32- and 64-bit Windows and Linux operating systems. Whereas Windows dominates the PC market, open-source Linux has strong penetration in the high-performance computing (HPC) market. The emergence of programmable GPUs may encourage some companies to build HPC clusters that rely more on GPUs than on CPUs.

"Few technologies have made the leap that the GPU has made over the last several years," Huang said. "When we started the company, the GPU was really just an accelerator. Over the years, that graphics processor has become a general purpose parallel computer. When it started out, it understood only graphics languages -- Direct3D and OpenGL. The GPU today understands C and C++, the language of computing."

The Future is Not Soon Enough
Among the real-time visualization features made possible by the GPU's advance is ray tracing, or simulation of realistic light behaviors on surfaces. Companies like Autodesk, which has been advocating the use of digital prototypes for the tasks once served by clay models, will undoubtedly seek ways to squeeze every drop of processing power out of the GPU.

For some, rewriting software code to take advantage of CUDA's parallel computing architecture might take some time. Two companies exhibiting at NVISION 08 -- artificial intelligence solutions provider Massive Software and enterprise CAD exchange platform developer Anark -- acknowledged that they anticipate performance increases in their products with CUDA, but both have just begun working with NVIDIA on this front, so neither was prepared to release details.

Manifold, a geospatial software developer headquartered in Nevada, claims bragging rights as "the first GIS ever to support massively parallel computing using hundreds of stream processors via NVIDIA CUDA technology." According to Dimitri Rotow, product manager for Manifold, it took the company only about two months to rewrite and test its products to realize the benefits promised by CUDA.

"With the CUDA configuration, calculations that previously took 20 minutes to complete are now done in 30 seconds. Moreover, calculations that previously took 30 to 40 seconds are now real time. It is no exaggeration to say that, at least for our industry, NVIDIA CUDA technology could be the most revolutionary development in computing since the invention of the microprocessor," Rotow said.

 

Manifold, a geospatial software maker, claims to have shipped the first GIS program to tap into CUDA's multi-core processing power.

The Battle Has Just Begun
Don't expect Intel to sit idly by while NVIDIA makes a move on its territory on the motherboard. At SIGGRAPH 08, just a few weeks before NVISION 08, the chipmaker presented a first-ever paper on its forthcoming many-core architecture, code-named Larrabee, expected to debut in 2009 or 2010.

According to Intel, Larrabee is a new breed of processor featuring "a new approach to the software rendering 3D pipeline, a many-core (many processor engines in a product) programming model, and performance analysis for several applications."

Larrabee will initially target the personal computer graphics market, but if it lives up to its promise, hardware and software makers serving the professional market will certainly embrace it.

Intel's paper prompted a public response from NVIDIA. In the document titled "A Viewpoint from NVIDIA" distributed to the press, the company wrote, "Intel claims that the X86 base of Larrabee makes it seamless for developers. But with conflicting statements coming from Intel themselves on whether or not there will be a new programming model or not, there are several important questions: Will apps written for today's Intel CPUs run unmodified on Larrabee? Will apps written for Larrabee run unmodified on today's Intel multicore CPUs? The SIMD [Single Instruction, Multiple Data] part of Larrabee is different from Intel's CPUs, so won't that create compatibility problems?"

When I contacted HP, Autodesk, and SolidWorks to inquire about their preparedness to take advantage of Larrabee, all declined to comment. Perhaps that's understandable. The wisest course of action might be to wait until the battle is over, then declare one's allegiance to the victor.

 

At its CUDA Zone portal, NVIDIA showcases many applications of the CUDA software tools.