Workstations

Is Cloud-Based CAD Ready for Prime Time? Part 3

31 Jul, 2014 By: Alex Herrera

Herrera on Hardware: A new generation of server-side technologies offers high-performance, interactive computing for CAD.


When it comes to high-performance computing, where should the concentration of power reside? The first part of this series retraced the basic history of client/server computing, highlighting many compelling advantages promised by server-based computing (SBC) solutions. With so much going for the technology, one might think these solutions would be commonplace in CAD applications today — but they’re not. Yes, SBC approaches such as virtual desktop infrastructure (VDI) have found broader acceptance in mainstream corporate applications that don’t demand much in the way of complex graphics. But when it comes to serving up the rich 3D CAD content that engineers and designers create and visualize, SBC implementations are essentially nonexistent.

Why is that the case? Part two of this series explained that while the theoretical rewards are truly compelling, the technical limitations of previous-generation hardware and of network service quality — in terms of both latency and bandwidth — made those rewards difficult or expensive to deliver in practice. Simply put, SBC solutions either fell short in performance or disappointed in versatility.

But dramatic shifts in both technology and usage models over the past decade — in particular, the explosion of visual data — have flipped the script on the old constraints. Today’s “big data” challenges have reignited interest in centralized computing approaches, buoyed by a reordered set of high-priority IT goals — including maximizing security, enabling collaboration, and opening access to computing from any Internet-enabled location.

A new generation of comprehensive hardware solutions is coming to market from the biggest suppliers of conventional workstations and servers. Exploiting fresh silicon technology from suppliers including NVIDIA, AMD, and Teradici, and garnering support from remote-hosting software vendors such as VMware, Citrix, and Microsoft, OEMs including HP, Dell, Fujitsu, and Boxx have all launched server-class products capable of delivering a 3D graphics experience comparable to what CAD users have come to expect from their deskside machines. Workstation-caliber SBC solutions are emerging in two forms: physically hosted remote workstations and virtually hosted workstations, typically deployed with VDI technology.

Remote Workstations: One User, One Remote Machine

A remote workstation provides high-demand visual computing from a Windows or Linux platform by transmitting the user's hosted physical desktop across a network to a remote display. Two accepted remote workstation computing models are in use today: the local/under-the-desk model and the cloud/datacenter model. From a system architecture standpoint, the approaches are very similar, with both providing a ratio of one physical workstation to one user. With the local/under-the-desk model, the remote workstation is a traditional deskside workstation, typically the same machine the remote-access user has in his or her office. The employee uses the workstation in the conventional manner at the office, and accesses it remotely when away.


The two basic remote workstation models, with varying form factors.


Physically hosted remote workstations deliver many of the benefits that SBC technologies promise — such as high security and access at any time, from any location — while also offering three advantages no other SBC approach can provide. First, because each client owns all of the components of the host system, including the GPU, the user can rely on the same level of performance he or she would expect from a familiar deskside workstation. Second, the GPU driver in use is built from the same code base as the GPU's native client-side driver, meaning that applications that work with the traditional workstation driver should work when running on the host. And third, machines are basically running as they would in a traditional deskside configuration, so supporting software is both simpler and less expensive.

Exclusive ownership of hardware represents a double-edged sword, however. For the same reason it can offer reliable and consistent performance — dedicated physical resources — it misses out on a benefit many IT professionals see as particularly appealing in SBC: the ability to better leverage an investment in a single set of hardware resources to serve multiple users.

Virtual Workstations: Multiple Users, One Shared Remote Host

When a remote workstation user takes a break or checks e-mail, valuable compute cycles go to waste. Enter the SBC approach that today commands the most mindshare, and by a wide margin: virtual desktop infrastructure, or VDI.

With VDI, desktops are hosted virtually, not physically, meaning multiple remote clients can share a host server's resources while enjoying the same computing experience they would have on their own private desktops. It's not the operating system (OS) but a hypervisor that runs directly on the hardware (what's commonly referred to as a Type 1 hypervisor), with multiple instances of the client OS on top. Each instance of the OS, in combination with a client's specific applications, composes a virtual machine (VM). A server can host multiple VMs supporting multiple remote client desktops.
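To make that arrangement concrete, here is a minimal Python sketch of the Type 1 model, in which the hypervisor owns the physical host and each VM bundles a guest OS with a client's applications. The resource figures and names are purely illustrative, not drawn from any particular product.

    # Minimal, illustrative sketch (not vendor code) of the Type 1 hypervisor model:
    # the hypervisor owns the physical host, and each virtual machine (VM) bundles
    # a guest OS with a client's applications, drawing on a shared pool of resources.
    from dataclasses import dataclass, field

    @dataclass
    class VirtualMachine:
        name: str
        guest_os: str      # e.g., "Windows" or "Linux"
        apps: list         # the client's CAD applications
        vcpus: int
        memory_gb: int

    @dataclass
    class Hypervisor:
        host_cpus: int
        host_memory_gb: int
        vms: list = field(default_factory=list)

        def provision(self, vm):
            """Admit a new VM only if the shared host still has capacity."""
            used_cpus = sum(v.vcpus for v in self.vms)
            used_mem = sum(v.memory_gb for v in self.vms)
            if (used_cpus + vm.vcpus <= self.host_cpus
                    and used_mem + vm.memory_gb <= self.host_memory_gb):
                self.vms.append(vm)
                return True
            return False

    host = Hypervisor(host_cpus=32, host_memory_gb=256)
    host.provision(VirtualMachine("designer-01", "Windows", ["CAD app"], vcpus=8, memory_gb=32))
    host.provision(VirtualMachine("designer-02", "Windows", ["CAD app"], vcpus=8, memory_gb=32))
    print(f"{len(host.vms)} VMs sharing one physical host")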


The basic difference between traditional and virtualized computing architectures. Image courtesy of Jon Peddie Research.

 


Virtualized SBC schemes can be configured to work with three basic graphics delivery approaches:

  • A dedicated GPU, also known as a GPU pass-through.
  • GPU sharing.
  • A virtualized GPU.

Just as the name suggests, in a dedicated GPU approach, the host server's physical GPU (or each of perhaps several installed GPUs) is dedicated to a single VM in a 1:1 connection.


The dedicated GPU, also known as the pass-through approach to graphics delivery. Image courtesy of Jon Peddie Research, adapted from NVIDIA.
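For illustration only, the short Python sketch below captures what "dedicated" means in practice: a strict one-to-one pairing of physical GPUs to VMs, so the number of installed GPUs caps the number of clients served this way. The names and the assign-by-arrival policy are assumptions made for the example.

    # Illustrative sketch only: pass-through means a strict 1:1 mapping between a
    # physical GPU and a VM. Once every installed GPU is claimed, no further VM
    # can be given dedicated graphics.
    def assign_passthrough(gpus, vms):
        """Pair each VM with its own physical GPU; extra VMs go without."""
        assignments = dict(zip(vms, gpus))   # 1:1 pairing, in order of arrival
        unserved = vms[len(gpus):]           # VMs left over once GPUs run out
        return assignments, unserved

    assignments, unserved = assign_passthrough(
        gpus=["GPU0", "GPU1"],
        vms=["designer-01", "designer-02", "designer-03"],
    )
    print(assignments)  # {'designer-01': 'GPU0', 'designer-02': 'GPU1'}
    print(unserved)     # ['designer-03'] -- needs another GPU or a sharing scheme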


The dedicated GPU's advantage lies in the exclusivity of the GPU hardware resources — a higher-demand client "owns" the GPU and need not worry about another client robbing performance during peak demand. But since a virtually hosted desktop is sharing the rest of the host's physical resources (e.g., CPU, memory, storage, and network), it often makes more sense to share the GPU as well. Today, that's accomplished primarily via GPU sharing: relying on virtualization software to intercept application programming interface (API) calls and trick each VM into thinking it has its own private, dedicated GPU resource.


Software-abstracted GPU sharing in a virtualized desktop model (VDI). Image courtesy of Jon Peddie Research, adapted from NVIDIA.
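As a conceptual illustration of that interception, consider the hypothetical Python sketch below: a software shim stands in for each VM's "private" GPU and funnels translated API calls to the single physical device. This is not how VMware, Citrix, or any other vendor actually implements sharing; it only shows where the extra software layer sits.

    # Hypothetical sketch of software-abstracted GPU sharing: a shim intercepts each
    # VM's graphics API calls and funnels them to the one physical GPU, so every VM
    # believes it owns the device. The extra hop is the source of the overhead.
    class PhysicalGPU:
        def execute(self, vm_name, call):
            print(f"GPU executing {call!r} on behalf of {vm_name}")

    class SharedGPUProxy:
        """What each VM sees as 'its' GPU; really a software trap to a shared device."""
        def __init__(self, vm_name, real_gpu):
            self.vm_name = vm_name
            self.real_gpu = real_gpu

        def api_call(self, call):
            translated = f"translated({call})"   # interception + API translation
            self.real_gpu.execute(self.vm_name, translated)

    gpu = PhysicalGPU()
    vm_a = SharedGPUProxy("designer-01", gpu)
    vm_b = SharedGPUProxy("designer-02", gpu)
    vm_a.api_call("glDrawElements(...)")
    vm_b.api_call("glDrawElements(...)")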


GPU sharing is a reasonable solution for many, but it is not ideal for all. As one might anticipate, there's a performance penalty associated with relying on software traps and abstraction to accomplish virtualization. Plus, there's another, potentially more serious downside that comes with software-abstracted GPU sharing: its reliance on API translation rather than the native client driver. Translation support typically lags dramatically behind the client driver in API compatibility. For example, while a GPU driver for AMD FirePro or NVIDIA Quadro on a conventional workstation will support OpenGL 4.0, API support via GPU sharing in VMware's VDI solutions does not extend beyond OpenGL 1.3. For those relying on modern CAD applications built on more recent versions of these APIs, software-abstracted GPU sharing is a nonstarter.
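To see why that version gap is fatal rather than merely inconvenient, here is a small, hypothetical Python check of the kind an application or an evaluator might run: software needing OpenGL 4.0 features simply cannot proceed when the reported version stops at 1.3. The version strings below are illustrative.

    # Illustrative only: the API-translation gap in numbers. An application that
    # requires OpenGL 4.0 checks the version the driver (or translation layer)
    # reports and refuses to run if it falls short -- exactly the situation when
    # a sharing layer stops at OpenGL 1.3.
    REQUIRED_GL = (4, 0)

    def parse_gl_version(version_string):
        major, minor = version_string.split()[0].split(".")[:2]
        return int(major), int(minor)

    def can_run(reported_version):
        return parse_gl_version(reported_version) >= REQUIRED_GL

    print(can_run("4.0 NVIDIA Quadro driver"))    # True: native workstation driver
    print(can_run("1.3 shared-GPU translation"))  # False: software-abstracted sharing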

 


The Virtualized GPU, or vGPU

Finally, there’s a third remote graphics delivery model that can be harnessed to deliver workstation-caliber graphics performance for CAD: the virtualized GPU (vGPU). As with GPU sharing, each of a host's physical GPU devices serves multiple VMs, but the key difference in vGPU is that the virtualization is implemented in hardware rather than via software intercept and abstraction. Hardware support offers two distinct advantages: First, virtual abstraction of the GPU built directly into the hardware significantly reduces software overhead. Second, each instance of the graphics driver is equivalent to (or very close to) the native client driver, so it should be both current (in terms of API support) and robust.
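As a rough mental model only, the sketch below carves one physical GPU's frame buffer into equal slices and hands each slice to a VM. The sizes and names are invented for the example and do not correspond to actual GRID profiles.

    # Simplified, hypothetical model of hardware GPU virtualization (vGPU): one
    # physical GPU is split into fixed profiles -- here, equal slices of its frame
    # buffer -- and each slice is handed to a VM, which runs an essentially native
    # graphics driver against its slice.
    def carve_vgpus(total_framebuffer_gb, profile_gb):
        """Split a physical GPU's frame buffer into equal vGPU profiles."""
        count = total_framebuffer_gb // profile_gb
        return [f"vGPU-{i} ({profile_gb} GB)" for i in range(count)]

    profiles = carve_vgpus(total_framebuffer_gb=8, profile_gb=2)
    print(profiles)
    # ['vGPU-0 (2 GB)', 'vGPU-1 (2 GB)', 'vGPU-2 (2 GB)', 'vGPU-3 (2 GB)']
    # Four VMs share one board, each with a dedicated hardware slice, unlike the
    # API-translation approach above.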

Intel has been hinting at some future incarnation of hardware-based GPU virtualization, and we assume AMD is developing its own technology as well. But as of this writing, the only true vGPU implementation on the market is NVIDIA's GRID technology, supported by Citrix (currently) and VMware (coming soon).


NVIDIA GRID vGPU shares one physical GPU with hardware-enabled virtualization. Image courtesy of NVIDIA.



Virtually hosted workstations. To be sure, VDI promises a lot — it allows consolidation of clients in a server-side computing topology, yet applications run on the client OS they expect and perform best on. But there are tradeoffs to VDI, namely the lack of exclusive machine ownership, increased complexity, and often increased cost as well.

Particularly if attempting to serve high-performance 3D computing for CAD, a VDI host server supporting a multitude of unique clients (and their images) is going to demand far more in build costs than a generic server slated for transaction or batch computing duties. Since they're shared, costs for key supporting hardware components — including memory, I/O, and storage — will inevitably climb as additional clients get thrown into the mix. And since servers housing GPUs currently represent a tiny minority of the installed base, equipping a server for a role as a workstation-caliber VDI host will mean adding a GPU, or more likely a couple of GPUs.
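A back-of-the-envelope sketch makes the point. Every price below is a hypothetical placeholder, but it shows how quickly the added memory, storage, and GPUs push the per-seat cost of a VDI host toward, or past, the price of a deskside workstation.

    # Back-of-the-envelope cost sketch; all prices are hypothetical placeholders.
    def vdi_cost_per_seat(base_server, extra_memory, extra_storage, gpus, seats):
        """Total VDI host build cost spread across the clients it serves."""
        return (base_server + extra_memory + extra_storage + gpus) / seats

    per_seat_vdi = vdi_cost_per_seat(
        base_server=8000, extra_memory=3000, extra_storage=4000,
        gpus=2 * 4000, seats=8,
    )
    deskside_workstation = 3000  # hypothetical per-user deskside price

    print(f"VDI per seat: ${per_seat_vdi:,.0f} vs. deskside: ${deskside_workstation:,.0f}")
    # With these placeholder figures, the per-seat costs land in the same range,
    # echoing the article's conclusion that VDI won't necessarily cost less.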

Lastly, there's another hardware component that a high-demand, shared graphics server is likely going to need: an image processor to encode the rendered frames for effective, low-bandwidth transmission across the network to the user's desk (or lap). Yes, encoding a video stream of varied visual types (e.g., text, 2D and 3D graphics) can be performed on the host server's CPU(s), but that additional CPU burden could mean that critical, high-demand CAD clients aren't getting adequate compute resources to run their modeling and simulation software optimally. Solutions from vendors like Teradici, HP, and NVIDIA implement encoding in hardware to offload the CPU and keep plenty of cycles available for client processing.
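Some rough arithmetic shows why hardware encoding earns its keep: an uncompressed 1080p stream at 30 frames per second would consume on the order of 1.5 Gbps per user, so the rendered frames must be compressed heavily before they cross the network. The compression ratio in the sketch below is an assumption for illustration, not a vendor specification.

    # Rough arithmetic behind remote-display encoding. Uncompressed 1920x1080,
    # 24-bit frames at 30 frames per second need roughly 1.5 Gbps per user, far
    # beyond typical WAN links; the assumed compression ratio is illustrative.
    width, height, bytes_per_pixel, fps = 1920, 1080, 3, 30

    raw_bps = width * height * bytes_per_pixel * 8 * fps
    assumed_compression_ratio = 100          # hypothetical codec efficiency
    encoded_bps = raw_bps / assumed_compression_ratio

    print(f"Uncompressed: {raw_bps / 1e9:.2f} Gbps per user")
    print(f"Encoded (assumed {assumed_compression_ratio}:1): {encoded_bps / 1e6:.1f} Mbps per user")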


It takes a lot more hardware to make a server into a capable, workstation-caliber VDI host.


In the end, outfitting a capable VDI-based, workstation-caliber SBC solution probably won't cost less than equipping staff with conventional workstations ... and it could very well cost more.

Choose a Workstation-Caliber SBC Solution for Your Needs

The table is now set for SBC technology to legitimately serve high-demand, visually intensive professional applications such as CAD. The benefits are compelling, the technical tradeoffs are more favorable, and all the right names in modern computing are building and supporting capable solutions. But as this CAD in the Cloud series stated at the outset, the workstation-caliber SBC trend won't result in the end of tried-and-true client-side workstations — not by a long shot. Even the most optimistic forecasts wouldn't put a major dent in the traditional market for workstations.

For those for whom a switch to an SBC approach makes sense, the optimal SBC solution will vary according to their specific needs. Consider a designer or engineer with a well-equipped workstation at his or her desk. The heavy demands those professionals place on hardware will either keep them on traditional machines, possibly exploiting remote capabilities via the under-the-desk model, or put them on a remote workstation with dedicated resources they control outright. Less-frequent or lower-demand users, such as project managers, marketing staff, or sales engineers, might be well served by a virtually hosted approach on shared server resources that lets them tap into centralized, up-to-date design databases anytime, anywhere.

Ultimately, there are two things CAD IT planners and implementers should do. First, explore this new generation of workstation-caliber SBC solutions, because they are simply becoming too compelling to ignore. Vendors such as NVIDIA and Dell are creating online test drives and proof-of-concept virtualization centers to let customers try before they buy. Second, carefully weigh the tradeoffs of these new technologies, and resist the temptation to force-fit the technology to the organization's needs. What always works in the end is the right solution for the right problem, not adopting new technology simply because it's trendy or is being pushed hard by overzealous providers.


About the Author: Alex Herrera
