
GTC 2020 Reveals How Nvidia and Partners Plan to Leverage GPUs to Advance CAD

15 Apr, 2020 By: Alex Herrera

Herrera on Hardware: The annual conference — which was transformed this year from an in-person event to GTC Digital — made it clear that traditional 3D graphics is becoming a smaller piece of the puzzle.

Measuring cost isn’t just a financial exercise, of course, and cutting time is just as appealing as trimming dollars. With VR, Mortensen managed to cut build time for the initial mock-up from 10 days down to just seven. Worth noting, too: neither the added expenses nor the delays account for any necessary travel by project stakeholders, who with VR can always participate remotely.

No doubt, VR’s early history of immature technology led to unfulfilled promises, but as one architectural presenter put it: “The days of pilot projects have come and gone, and virtual reality is definitely here to stay.”

Beyond the Visuals: Accelerating Compute and Machine Learning Applications with GPUs

The days of the GPU being used and judged exclusively based on its ability to create 3D imagery — be it via conventional raster graphics (the application for which it was originally intended) or rendering — are long gone. Originally harnessed by academics looking to exploit levels of parallel floating-point computation well beyond what a CPU could offer, the modern GPU has undergone several generations of improvement in its ability to perform all kinds of non-visual computing applications. That’s resulted in adoption not only for compute-heavy tasks like engineering simulations, but more recently for machine learning, whose benefits the CAD world is just beginning to discover.

The GPU is a natural compute engine for parallelizable, floating-point-intensive engineering simulations like computational fluid dynamics (CFD), finite element analysis (FEA), and FEA’s close cousin, the discrete element method (DEM). Applications such as ANSYS Mechanical have long supported GPU acceleration, boosting throughput severalfold over CPU-only implementations.
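
To see why such workloads map so naturally onto a GPU, consider the heart of many grid-based solvers: a stencil update in which every cell is recomputed independently from its neighbors, leaving millions of cells available for parallel processing. Below is a minimal, illustrative Python sketch of a Jacobi relaxation step — a toy stand-in for real CFD/FEA kernels, not how ANSYS or zCFD actually work. It assumes the CuPy library and a CUDA GPU, and falls back to NumPy on the CPU otherwise.

```python
# A minimal sketch (illustrative only) of why grid-based solvers map so
# well onto GPUs: each Jacobi relaxation sweep for Laplace's equation
# updates every interior cell independently of the others, so thousands
# of GPU cores can process cells in parallel. Assumes the CuPy library
# and a CUDA GPU; falls back to NumPy on the CPU otherwise.
try:
    import cupy as xp   # GPU arrays with a NumPy-compatible API
except ImportError:
    import numpy as xp  # CPU fallback; same code, serial execution

def jacobi_step(u):
    """One sweep: each interior cell becomes the mean of its 4 neighbors."""
    u_new = u.copy()
    u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:])
    return u_new

# Toy boundary-value problem: one hot edge, the rest held cold.
n = 256
u = xp.zeros((n, n))
u[0, :] = 100.0              # fixed boundary condition
for _ in range(500):         # every sweep is one big parallel update
    u = jacobi_step(u)
print(float(u[n // 2, n // 2]))  # value at the grid's center
```

On a GPU, each sweep launches as one massively parallel kernel; the identical NumPy code runs the same arithmetic on the CPU, and that gap is exactly what GPU acceleration exploits.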

That thread of evolution continues, improving over time in performance and efficiency, with several vendors and researchers at GTC sharing new innovations and case studies. One team composed of Arup (an AEC consulting firm), Zenotech, and the University of London shared their CFD case study, assessing how quickly, how accurately, and how affordably their zCFD solver could capture the wind’s complex, micro-scale fluid behavior within the overall flow around a stadium.


GPUs accelerating the fine detail in air flow and turbulence around a stadium as a function of wind direction and speed. Image source: Arup / Zenotech.

Specifically, the team examined the dynamic air pressure around a single windward stadium tile, comparing CFD predictions against flow and turbulence characteristics measured experimentally with a fine array of sensors in a properly equipped wind tunnel. The study yielded several takeaways. First, GPU-based simulation provided the fastest means to get to useful results, and by a large margin, as Nvidia’s recent Volta generation (V100 class) took a big step forward in throughput over both a many-core CPU and the previous-generation Kepler (K80 class). Second, CFD simulation (GPU or otherwise) provides an effective and essential means to minimize risks and rework before committing to physical construction, but with a caveat: don’t cut corners on the workload, because achieving acceptable, insightful accuracy can demand fine-grained detail; skimp and you may end up with misleading results.

Third, the case study illustrates just how compute-hungry a simulation like CFD can be. The sheer magnitude of this CFD workload is mind-boggling: the job occupied eight Tesla V100s, together providing an aggregate 56 TFLOPS of peak double-precision throughput and 128 GB of memory, for a total of 160 hours to compute just 0.7 seconds of simulated time. And as the team pointed out, spending more cycles would return the benefit of still more accurate predictions. No doubt, for as far out as we can possibly see, applications like CFD will consume as many FLOPS (floating-point operations per second) as the industry can possibly produce, whether CPU- or GPU-supplied.
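
Some rough arithmetic puts that in perspective. The sketch below assumes the commonly published per-card figures of roughly 7 TFLOPS peak double-precision and 16 GB for a Tesla V100, which are consistent with the 56-TFLOPS/128-GB aggregates above; sustained throughput always falls well below peak, so treat the totals as upper bounds.

```python
# Back-of-envelope check of the case study's numbers, assuming the
# commonly cited ~7 TFLOPS peak FP64 and 16 GB per Tesla V100 (the
# article's 56 TFLOPS / 128 GB aggregates are consistent with that).
gpus = 8
peak_fp64_per_gpu = 7e12                    # FLOPS, peak double precision
mem_per_gpu_gb = 16

aggregate_flops = gpus * peak_fp64_per_gpu  # 5.6e13 = 56 TFLOPS
aggregate_mem_gb = gpus * mem_per_gpu_gb    # 128 GB

wall_hours = 160
simulated_seconds = 0.7
# Upper bound on operations consumed (real utilization is below peak):
total_ops = aggregate_flops * wall_hours * 3600      # ~3.2e19 FLOPs
ops_per_sim_second = total_ops / simulated_seconds   # ~4.6e19

print(f"{aggregate_flops / 1e12:.0f} TFLOPS, {aggregate_mem_gb} GB")
print(f"~{total_ops:.1e} peak FLOPs over the run")
print(f"~{ops_per_sim_second:.1e} FLOPs per simulated second")
```

Even as an upper bound, on the order of 10^19 operations for under a second of simulated airflow makes plain why such jobs will soak up every FLOP the industry can supply.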

And lastly, there’s good news, in the form of the cloud, for those who need such detailed, fine-grained simulation. While few can afford or justify the expense of purchasing all the hardware above, more providers, including Amazon Web Services, Microsoft Azure, and Google Cloud, are making machine instances with such hefty specs available for rent by the hour.
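
To make the rent-versus-buy logic concrete, here’s a deliberately hypothetical sketch. Both prices below are placeholders, not quotes from any provider or vendor; substitute current figures before drawing real conclusions.

```python
# Hypothetical rent-vs-buy break-even sketch. Both PRICE_* values are
# placeholders, not actual quoted rates; plug in current figures from
# your cloud provider and hardware vendor before drawing conclusions.
PRICE_BUY_8GPU_SERVER = 100_000.0  # hypothetical purchase price, USD
PRICE_RENT_PER_HOUR = 25.0         # hypothetical 8-GPU instance, USD/hr

breakeven_hours = PRICE_BUY_8GPU_SERVER / PRICE_RENT_PER_HOUR
print(f"Renting wins below ~{breakeven_hours:,.0f} instance-hours")

# At these placeholder rates, a 160-hour run like the stadium study
# costs a few thousand dollars rather than a six-figure purchase.
print(f"160-hour run: ${160 * PRICE_RENT_PER_HOUR:,.0f}")
```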

GPU Service to CAD Predicted to Push Further Beyond 3D Graphics

It’s been a long, bootstrapped process, but GPUs are now firmly established as compute engines whose domain extends well beyond the interactive 3D graphics that fueled their rise. Vendors and buyers will continue to weigh and exploit the value of the GPU for graphics, but as its other applications grow in stature, traditional 3D graphics prowess will continue to shrink as a criterion.

Perhaps the two most powerful trends pushing the GPU in that direction are rendering and machine learning. Both may end up rivaling graphics in ubiquity, or may eventually even eclipse it as the GPU’s primary raison d’être. Consider ray tracing: Nvidia’s very conscious and consequential decision to bet firmly on the technology’s future will likely prove the inflection point in 3D visualization, the point when a meaningful transition from raster graphics to ray tracing began. Ray tracing’s slow rise has had nothing to do with its quality, which is superior to raster graphics in virtually every respect, but rather with its performance. As performance capabilities continue to improve, especially with big leaps forward like RTX, there’s no reason ray tracing won’t keep eating away at raster graphics’ share. Yesterday, ray tracing was a boutique and expensive way to create maximum-quality images only when they were an absolute priority. Someday down the road, it will be the norm.
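
To appreciate why performance, not quality, has gated ray tracing, it helps to look at what the technique actually computes: every pixel spawns at least one ray that must be intersection-tested against scene geometry, and production renderers multiply that by bounces, lights, and samples. The minimal Python sketch below — a toy with one sphere and primary rays only, far removed from any production renderer or from RTX itself — shows the per-pixel work that raster pipelines avoid.

```python
# Toy illustration of ray tracing's per-pixel cost: one primary ray per
# pixel, intersection-tested against a single sphere. Real renderers
# add many bounces, lights, objects, and samples per pixel.
import numpy as np

def hit_sphere(origin, direction, center, radius):
    """Solve |origin + t*direction - center|^2 = radius^2 for real t."""
    oc = origin - center
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c   # direction is unit length, so a == 1
    return disc >= 0.0

width, height = 64, 48
cam = np.array([0.0, 0.0, 0.0])
center = np.array([0.0, 0.0, -3.0])  # sphere 3 units in front of camera
hits = 0
for y in range(height):
    for x in range(width):
        # Map the pixel to a point on an image plane at z = -1.
        px = (2 * (x + 0.5) / width - 1) * (width / height)
        py = 1 - 2 * (y + 0.5) / height
        d = np.array([px, py, -1.0])
        d /= np.linalg.norm(d)       # normalize the ray direction
        hits += hit_sphere(cam, d, center, 1.0)
print(f"{hits} of {width * height} primary rays hit the sphere")
```

Scale that loop to millions of pixels, dozens of bounces, and thousands of objects, and the appeal of dedicated ray-tracing hardware like RTX’s RT cores becomes obvious.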

Also evolving is where those GPU cycles are being deployed and consumed. Yes, conventional client workstations are increasingly harnessing the GPU for machine learning, rendering, and especially virtual reality (where interactivity is critical), but cloud vendors in particular will be a key driving force behind the GPU’s other uses. Providers big and small are seeing a more compelling value proposition for investing in datacenter GPUs, as they can be allocated to serve — dynamically, it’s worth emphasizing — far more than just one application.

Add it all up, and one day we may find ourselves struggling to remember what that “G” in GPU ever stood for.
