Cadalyst

CAD Tech News (#128)

4 Jun, 2020 By: Cadalyst Staff


Herrera on Hardware: Can 1-to-1 Remote Workstations Provide the Same Performance as Local Machines?

Are you concerned about latency with a remote computing solution? You'll want to evaluate whether response time and image quality meet your expectations or are noticeable enough to be an issue.

By Alex Herrera

Editor's Note: Click here to read Part 1 of this article, "Boxx Expands into Remote Workstations with Help from Cirrascale."

Boxx Technologies' acquisition of Cirrascale and subsequent launch of Boxx Cloud Services was not only prescient in its timing, but unique in what it's brought to the rapidly expanding cloud computing ecosystem. As explored in the first part of this article, Boxx Cloud Services is one of the most recent providers of remote desktop hosting solutions — a launch that dovetailed with the world's urgently renewed interest in remote computing, triggered by the COVID-19 crisis.

While it's not first to the cloud computing party, its offerings are anything but copies of the hosted desktops available from names like Amazon Web Services and Microsoft Azure. Boxx Cloud Services' for-rent workstations offer one-to-one dedicated hosted machines — not just comparable to Boxx's traditional deskbound machines, but identical. Forget the slower-clocked, server-optimized CPUs and the shared memory, storage, and GPU resources of a virtualized cloud platform; Boxx Cloud Services workstations would represent the top end in performance (including overclocked CPUs), were they packaged and sold as deskside towers.

Verifying the Premise of Identical Performance

Now, while it's theoretically solid to argue that the system throughput of the remote machine should essentially match the identically configured local machine, I (with Boxx's help) went ahead and benchmarked anyway. We ran SPECwpc 3.0.4's Product Development (focusing on common CAD compute and visual workloads), General Operations, and GPU Compute test suites. The results supported the theory, no surprise, as five composite results for workloads stressing CPU, graphics, storage, and GPU compute showed tight tracking between systems.

Differences were extremely small — in the noise — with the exception of 3D graphics performance. Overall, graphics ran about 5% slower on the remote machine, a result with an understandable explanation: PCoIP does chew up a bit of overhead, most notably in encoding the desktop screen as a video stream for return transmission. By default, PCoIP Client Software has to both perform that encoding in software and interrupt GPU graphics processing to fetch frames from video memory, the combination of which could logically account for a 5% hit. The good news is that PCoIP Client Software now also supports hardware video encoding on Nvidia RTX–class GPUs, further offloading the CPU and reducing that penalty (though this is a remedy I did not test).
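To see how a small per-frame cost can plausibly add up to the measured penalty, consider a simple frame-time budget. The overhead figure below is an assumption chosen for illustration, not a number measured in the benchmark:

```python
# Rough frame-time budget: how a fixed per-frame encode/readback cost maps
# to an overall graphics throughput penalty. Numbers are illustrative
# assumptions, not measurements from the SPECwpc runs.

FRAME_TIME_MS = 1000 / 60   # 60 fps target -> ~16.7 ms available per frame
overhead_ms = 0.83          # assumed cost of framebuffer readback + software encode

penalty = overhead_ms / FRAME_TIME_MS
print(f"{penalty:.1%}")     # ~5.0% of each frame spent on encoding work
```

Under these assumptions, stealing well under a millisecond from every frame is enough to account for a roughly 5% hit, which is also why offloading the encode to dedicated GPU hardware shrinks the penalty.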

No surprise, the same workstation produces essentially the same throughput (when tested with the SPECwpc 3.0.4 benchmark), no matter where it is.

Network — Perhaps Especially Latency — Is the Most Important Performance Consideration

Using SPECwpc to test that two essentially identical machines can deliver the same throughput is not a particularly interesting exercise or revealing comparison (beyond quantifying that modest and explainable graphics performance penalty). We're talking about the same Boxx model, with that machine next to your desk in one case, and in a rack somewhere else in the other.

Rather, when we're comparing using a local workstation under the desk to using the same machine located in a remote datacenter, we need to consider how well the network — both the local-area and wide-area networks (LAN and WAN) between you and the remote workstation — can support both the display of your desktop screen and interactivity. Essentially, that comes down to bandwidth and latency.

With respect to bandwidth, the network will be burdened with transporting at least one (and more likely, two or three) encoded streams of at least FullHD resolution from datacenter to client. Thankfully, given the robust improvement in available bandwidth from mainstream LAN technologies and WAN providers, bandwidth is arguably the lesser concern, as modern Internet access will more than likely suffice in the vast majority of small business and home offices. Read more »
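The bandwidth side of that claim is easy to sanity-check with a back-of-envelope sketch. The per-stream bitrate and headroom factor below are common rules of thumb assumed for illustration, not figures from the article:

```python
# Back-of-envelope downstream bandwidth sizing for a remote workstation.
# All constants are illustrative assumptions, not measured values.

def required_downstream_mbps(num_displays, mbps_per_stream=15.0, headroom=1.5):
    """Estimate downstream bandwidth for N encoded FullHD desktop streams.

    mbps_per_stream: assumed bitrate for one H.264-encoded 1080p desktop
    stream at interactive frame rates (a typical ballpark figure).
    headroom: multiplier to absorb bursts such as scene changes or
    full-screen window drags.
    """
    return num_displays * mbps_per_stream * headroom

# A three-monitor CAD setup under these assumptions:
print(required_downstream_mbps(3))  # 67.5 Mbps -- within typical broadband
```

Even a triple-display setup lands well within the downstream capacity of typical modern broadband, which is why latency, rather than bandwidth, tends to be the gating factor for interactivity.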

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Alex Herrera is a consultant focusing on high-performance graphics and workstations.

What Is a DPU? The Third Pillar of Computing, Says NVIDIA

During his "kitchen keynote" for GTC Digital, NVIDIA CEO Jensen Huang introduces data processing units (DPUs), unveils the A100 GPU, and welcomes early adopters into the Omniverse.

By Cadalyst Staff

Although NVIDIA's annual GPU Technology Conference (GTC) — originally planned as an in-person event in San Jose, California — is being held in an online-only format this year, some things haven't changed. The sight of CEO Jensen Huang in his trademark black leather jacket was reassuringly familiar, even though Huang delivered his portions of the keynote from his kitchen instead of the usual stage.

The GPU — short for graphics processing unit — is of course very familiar to the GTC crowd, but Huang also discussed a less well-known acronym: DPU, for data processing unit. "This is going to represent one of the three major pillars of computing going forward: The CPU for general-purpose computing, the GPU for accelerated computing, and the DPU, which moves data around the data center and does data processing," Huang said.

Huang explained how the need for such technology has arisen: "Over the past several years, two fundamentally new dynamics have happened to take accelerated computing to the next level. The first is the emergence of this new type of algorithm called data-driven or machine-learning algorithms: Data processing, and the movement of data around the data center, is more important than ever. The second is, the applications we're processing now are so large [that they don't] fit in any computer. No server, no matter how powerful, [is] able to possibly process the type of application workloads that we're now looking at. In fact, the server is no longer the computing unit — the data center is the new computing unit.

"With software-defined data centers, and application developers able to write applications that run in the entire data center, it is important now for us to think about optimizing across the entire end-to-end of a data center, from networking and storage to computing — for us to optimize the entire stack, top to bottom," he continued. "To be able to optimize at the data center scale is NVIDIA's new approach, and I believe that in the next decade, data center–scale computing will be the norm and data centers will be the fundamental computing unit." Read more »

WHAT’S NEW FROM CADALYST

Vendors Alter Plans and Policies to Support CAD Community During COVID-19 Crisis
The companies that supply designers and engineers are changing license terms to support working from home, and taking in-person gatherings online to help keep attendees safe. Read more »

Dell Overhauls Precision Workstation Lineup with New and Redesigned Models
New mobile lineup features smaller footprints and improved thermal management; new tower options include an ultra-small form factor design. Read more »

Sponsored: Road and Bridge Digital Twins in Action — Four Case Studies
Digital twins can help road agencies integrate multiple siloed data sources, track and visualize change, produce actionable insights, and more. This overview highlights examples of digital twin applications ranging from roadways to an emergency bridge replacement. Read more »

CAD Manager Column: CAD Management Vision — From Good to Great, Part 1
Whether your workplace is tolerant of errors or trained on excellence, Jim Collins's book has helpful advice for all CAD managers who are seeking greatness. Read more »

 


About the Author: Cadalyst Staff



More News and Resources from Cadalyst Partners

Cadalyst Benchmark Test

Running AutoCAD? Test Your Hardware! Designed to test and compare the performance of systems running AutoCAD, Cadalyst's Benchmark Test is a popular and long-time favorite.

Learn more and download the Cadalyst Benchmark Test.


White Paper: Take the Gradual Path to BIM. Implementing building information modeling (BIM) can be a daunting challenge, but you don't have to take on everything at once. Learn how small, incremental advances can yield big benefits over time. Download your paper today!


Discover DraftSight®: the capabilities of AutoCAD® and more, at the right price. DraftSight is a 2D and 3D CAD solution that can create, edit, view, and mark up any DWG file with greater ease, speed, and efficiency. Learn more here.