NVIDIA’s Long Game: The Evolution from GPU Maker to AI Computing Company, Part 1
22 Sep, 2017 By: Cyrena Respini-Irwin
Tony Paikeday discusses how NVIDIA laid the technological groundwork to become an AI computing company — and why.
A few years ago, it became apparent that NVIDIA was planning a multifaceted future for graphics processing units (GPUs) that went far beyond graphics. Today, NVIDIA is actively developing applications of GPUs in areas ranging from supercomputing to autonomous driving to machine learning and artificial intelligence (AI) — and has even begun calling itself “the AI computing company.”
Tony Paikeday, NVIDIA’s director of product marketing for Artificial Intelligence and Deep Learning, explained that the seeds of the technology initiatives we’re seeing today were planted a decade or more ago — before NVIDIA’s annual GPU Technology Conference (GTC) first began, and long before CAD professionals stood to benefit from AI-accelerated raytracing, AI-enhanced optimization of structures for 3D printing, or AI-based generative design.
The Intelligent Industrial Revolution
“The landscape that we’ve been in has gone through a lot of inflection points or transitions,” Paikeday noted. He sees similarities between the “intelligent industrial revolution,” as NVIDIA CEO Jensen Huang calls it, and other periods when revolutionary technologies reshaped the modern world.
First was the arrival of the Internet era, when disruption was created by the proliferation of low-cost computing and high-bandwidth connectivity. “At that time, it wasn’t hard to imagine an Internet-connected PC in every home, transforming our daily lives … as a developer, you’d be crazy not to be writing software products that ran on Windows PCs.”
A decade later, we entered the iPhone era. “That was also disruptive, putting that powerful experience in our pocket,” Paikeday observed. And now, “this next era being built on AI computing will have even broader reaching, more pervasive impact.”
And that impact is not something reserved for the far-off future; deep learning powered by GPUs has already made its way into a vast array of functions and services, such as those offered by Google and Amazon, that are integrated in our everyday lives. “It’s so pervasive, in personal assistants, recommendation engines, and all this,” Paikeday said.
With an estimated 3,000 AI-focused startups as of late last year, and an IDC prediction that worldwide spending on AI will grow to $47 billion by 2020, it’s clear that a substantial portion of the market shares Paikeday’s enthusiasm.
CUDA Can-Do
The story started with CUDA, a parallel computing platform and programming model that lets developers access the capabilities of the GPU (the name comes from compute unified device architecture). “We introduced it ten years back, to help our developers use GPUs for much more than graphics rendering when we realized that the same computing power that can put a pixel on a screen is also great at math — especially the kind of math used in really complex models and arithmetic involving matrices,” explained Paikeday.
The company had realized that its customers didn’t want to learn a new language or new application programming interface (API) to access that computing power. The answer, said Paikeday, was to provide an easy solution “that would have you learning how to use GPU computing really insanely quick, like in hours or even minutes.” With CUDA, software developers could adapt their existing programs to offload computational work to the thousands of compute cores in a GPU, “in a way that made the GPU a really valuable complement to the CPU,” he noted.
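To make that offload model concrete, here is a minimal, illustrative sketch (not NVIDIA's code, and not from the interview) of the pattern Paikeday describes: a loop that would run serially on the CPU is rewritten as a kernel, and each of the GPU's thousands of cores handles one element. The file name and values are hypothetical.

    // Illustrative vector-add sketch: one GPU thread per array element.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void add(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
        if (i < n) c[i] = a[i] + b[i];                   // one element per thread
    }

    int main() {
        const int n = 1 << 20;
        float *a, *b, *c;
        cudaMallocManaged(&a, n * sizeof(float));        // unified memory, visible to CPU and GPU
        cudaMallocManaged(&b, n * sizeof(float));
        cudaMallocManaged(&c, n * sizeof(float));
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        add<<<blocks, threads>>>(a, b, c, n);            // offload the loop to the GPU
        cudaDeviceSynchronize();                         // wait for the GPU to finish

        printf("c[0] = %f\n", c[0]);                     // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }

The point of the pattern is the one Paikeday makes: the host program stays in ordinary C/C++, and only the computationally heavy loop moves to the GPU, leaving the CPU free for everything else.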
NVIDIA now provides a CUDA Toolkit that comprises GPU-accelerated libraries, debugging and optimization tools, a C/C++ compiler, and a runtime library. Extensions are available for well-known languages such as Fortran and Python. Additional support includes “getting started” resources, optimization guides, illustrative examples, and a developer community.
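For many developers, the toolkit's GPU-accelerated libraries remove even the need to write kernels. As a hypothetical example (again, not drawn from the article), a single cuBLAS call can run a standard linear-algebra operation on the device; the file name and compile line are illustrative.

    // Illustrative use of a toolkit library: cuBLAS computes y = alpha*x + y on the GPU.
    #include <cuda_runtime.h>
    #include <cublas_v2.h>
    #include <cstdio>

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        cublasHandle_t handle;
        cublasCreate(&handle);                        // initialize the library
        const float alpha = 2.0f;
        cublasSaxpy(handle, n, &alpha, x, 1, y, 1);   // runs on the GPU
        cudaDeviceSynchronize();
        printf("y[0] = %f\n", y[0]);                  // expect 4.0
        cublasDestroy(handle);
        cudaFree(x); cudaFree(y);
        return 0;
    }
    // Built with the toolkit's compiler: nvcc saxpy.cu -lcublas -o saxpy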
As a result of its efforts to reduce the barriers to entry, NVIDIA saw what Paikeday termed “an explosion of developer interest,” with the GPU developer network growing elevenfold over the past five years to 511,000. Last year, the company logged more than 1 million downloads of CUDA.
The Long Road to AI
So how did NVIDIA become “deeply invested in the AI space” at the same time the company was building its name as a visual computing provider? Pursuing the dual trajectories of visual computing and large-scale mathematical computation put NVIDIA in the right place at the right time when AI — especially AI powered by GPUs — really took off. Paikeday referenced the advice that hockey legend Wayne Gretzky espoused: “Our story is one about seeing where the puck is going, making a bet, and then investing diligently over an extended period of time before it explodes … and AI, which has been enabled by what we call GPU computing, is this transformation that we’ve taken.”
In this case, skating to where the puck was going to be required not just a software platform, but an optimized hardware architecture. “[NVIDIA sought to] offer the marketplace fully integrated systems, purpose-built for AI. So we developed and introduced … the DGX portfolio of supercomputers, and those are basically built on specially engineered versions of the most popular deep learning frameworks and CUDA and cuDNN [the CUDA Deep Neural Network library] and other libraries that we’ve created, all tightly integrated and optimized to run deep learning.”
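cuDNN sits beneath those frameworks as a library of GPU-tuned deep learning primitives. A minimal, hypothetical sketch (not from the article) of touching it directly, assuming the library is installed, is simply creating a handle and querying the runtime version — the kind of low-level call the framework builds layer on top of.

    // Illustrative cuDNN check: create a library handle and report the version.
    #include <cudnn.h>
    #include <cstdio>

    int main() {
        cudnnHandle_t handle;
        if (cudnnCreate(&handle) != CUDNN_STATUS_SUCCESS) {   // needs a CUDA-capable GPU
            printf("cuDNN unavailable\n");
            return 1;
        }
        printf("cuDNN runtime version: %zu\n", cudnnGetVersion());
        cudnnDestroy(handle);
        return 0;
    }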
Editor’s note: Click here to read NVIDIA’s Long Game Takes GPU from Graphics Workhorse to AI Powerhouse, Part 2