Is Cloud-Based CAD Ready for Prime Time? Part 2
7 Jul, 2014 By: Alex Herrera
Herrera on Hardware: Centralized, server-side graphical computing technology is on the upswing — but will it stay that way?
Where should the concentration of power for high-performance computing reside? The first part of this series retraced the basic history of client/server computing, in which the prevailing wisdom at times pushed that concentration toward servers in the datacenter and at others swung it toward desktop workstations. Today, IT strategies are in flux again, with many headed in a very clear direction: toward server-based computing (SBC) models that pack not only computation and data on the server but — for the first time — high-performance graphics processing as well.
Manifested in a range of IT solutions tagged with hot buzzwords — such as virtual desktop infrastructure (VDI), desktops as a service (DaaS), hosted virtual desktops (HVD), and, of course, cloud computing — SBC approaches the promised IT nirvana: anytime, anywhere access enabling across-the-globe collaboration, along with the ultimate in security and ease of management. By keeping models and databases resident in a central resource, IT managers are far better equipped to manage everything in this age of "big data." And most relevant to those relying on visually intensive CAD applications, this latest thrust in SBC is promising one more feature: interactive, high-performance 3D graphics.
Past Obstacles to SBC Viability
Despite an appealing list of advantages over traditional client-side solutions, including workstations, SBC approaches today are virtually nonexistent in CAD and other professional computing spaces — and for good reason. Previous attempts to centralize data, computation, and rendering have failed to deliver the workstation-caliber experience a CAD user demands. Rather, past SBC solutions have required an unworkable compromise: Secure the benefits of centralized computing or enjoy high-performance graphics computing, but not both.
While attractive in theory, delivering high-performance interactive graphics from a single, centralized computing resource to a remote client turns out to be anything but easy. Historically, two fundamental problems have stood in the way: excessive latency and a bandwidth paradigm particularly unsuited to remote visualization.
High-performance computer graphics processing is a notorious bandwidth hog, when it comes to both rendering and displaying the 3D image — and it's that attribute that has made SBC solutions a nonstarter for graphics-intensive computing. With a conventional client-side model, only model data is transmitted from server to client (and that perhaps happens infrequently, as a designer "checks out" a model from a project database). All pixel data stays local on the client. But in an SBC model, in contrast, the server performs rendering and transmits only the resulting pixel images to the client. Model data never leaves the server.
With client-side rendering (left), model data is transmitted, but pixel data stays local to the client. With server-based computing approaches (right), model data remains on the central server, while pixel data traverses the network.
That fundamental difference between the two models raises an obvious question: Which is the bigger bandwidth burden, a CAD model or the images of that model? Minimizing network traffic has always been a critical goal, and it mattered even more in the earlier stages of communications infrastructure build-out in decades past. Back in the 1990s and into the '00s, the answer was crystal clear: it was the pixels, and by a wide margin. Limited by computers' processing, memory, and storage, models were relatively small in detail and size.
The pixel streams, by comparison, were huge — to transmit one 1,280 x 1,024-resolution stream at 30 Hz, for example, you'd need roughly 1.2 Gb/second of raw (uncompressed) sustained bandwidth. Even with the modest-quality video compression feasible at the time, moving such streams from a central resource across local-area and wide-area networks was a tall order, and one that seemed wise to avoid, given the relatively modest size of the datasets that could be copied to clients instead. When it came to addressing bandwidth demands, it made a lot more sense for high-demand computing applications such as CAD to stick with a desktop workstation that did it all: computation, rendering, and display.
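The raw-bandwidth figure above follows directly from resolution, color depth, and refresh rate. A minimal sketch of that arithmetic, assuming 32-bit color (the function name and parameters are illustrative, not from the original article):

```python
def raw_stream_bandwidth_gbps(width, height, bits_per_pixel, refresh_hz):
    """Raw (uncompressed) bandwidth, in gigabits per second, needed to
    sustain a pixel stream at the given resolution and refresh rate."""
    bits_per_frame = width * height * bits_per_pixel
    return bits_per_frame * refresh_hz / 1e9

# The article's example: 1,280 x 1,024 at 30 Hz, assuming 32-bit color
bw = raw_stream_bandwidth_gbps(1280, 1024, 32, 30)
print(f"{bw:.2f} Gb/s")  # ~1.26 Gb/s, in line with the ~1.2 Gb/s cited
```

At 24-bit color the figure drops to roughly 0.94 Gb/s, so the exact number depends on the color depth assumed; either way, it dwarfs the model data transferred in a client-side approach.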
That overwhelming demand on bandwidth made server-side rendering an unattractive technical proposition, but the bigger roadblock to past acceptance may have been excessive latency. Have you ever experienced a videoconference where excessive lag time between speaking and being heard gets everyone talking on top of each other? Now, imagine the same annoying lag between the time you move your mouse to change the view of a CAD model and the moment that the model actually rotates on the screen.
Interactivity demands a snappy machine response. Once the "round trip" latency — the interval between user input and visual response — climbs past 150 milliseconds (ms) or so, interactivity becomes problematic, and much beyond 200 ms, the system becomes downright unusable. Add up the multiple sources of latency introduced by the elements of an SBC approach — high processing demand, slower hardware, and immature networks — and historically, getting that round-trip latency into a reasonable range has been either very difficult or very expensive.
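The round-trip number is the sum of several independent stages, which is why any one slow link can push the whole pipeline past the usability threshold. A sketch of such a latency budget — every component value here is hypothetical, chosen only to illustrate how the stages accumulate:

```python
# Hypothetical per-stage latencies (ms) for a remoted 3D session.
# The stage names and figures are illustrative, not measured values.
latency_budget_ms = {
    "client input capture":      5,
    "network uplink":           20,
    "server render + encode":   40,
    "network downlink":         30,
    "client decode + display":  25,
}

round_trip = sum(latency_budget_ms.values())
print(f"Round trip: {round_trip} ms")  # 120 ms, under the ~150 ms threshold
```

With these illustrative numbers the session stays responsive, but doubling just the network legs (say, over an immature WAN) would push the total well past the 150-200 ms range the article describes as problematic.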