cadalyst

Virtual Reality and Augmented Reality for CAD, Part 3

18 Jul, 2019 By: Alex Herrera

Herrera on Hardware: If you want to dial up your VR environment for CAD, explore options that extend beyond baseline systems, including no-compromise performance upgrades and immersive display solutions.


Virtual reality — the immersive and disbelief-suspending experience of a purely or partly synthetic world — offers particular appeal in CAD. As explored in Part 1 of this series, applications from manufacturing to architecture and construction can realize undeniable benefits from the ability to see, manipulate, and navigate big-budget designs like cars, planes, and buildings in rich interactive detail well before any dollars are sunk into physical design. In Part 2, we looked at the baseline set of hardware components that yield a VR solution that is cost-effective and can produce an impactful and useful VR experience for both designers and clients.

In this third and final installment, we'll examine some of the most effective ways to dial up your hardware to deliver anywhere from a better to a best-possible VR solution. Unless noted otherwise, the reader can assume the commentary applies to augmented reality (AR) applications as well as to VR.

Taking It Up a Notch … or Two

In Part 2 of this series, we introduced vendors’ logo programs — such as NVIDIA’s VR Ready — which have been created to give prospective VR adopters solid baseline requirements for hardware that can deliver a productive and credible VR experience. Upgrading hardware from that baseline focuses primarily on what are arguably the two most important links in the VR chain: the graphics subsystem that creates the images and the interactive displays that deliver them, especially (but not limited to) HMDs.

VR shares notable synergies with two other key GPU technologies: raytracing and multi-GPU rendering. Both dovetail with VR's paramount goal: creating imagery credible enough to trick, or at least please, the human visual system. When it comes to the most photorealistic CGI, raytracing rules the roost. Because it models the physical behavior of the real world, lighting and materials in particular, raytracing is perfectly suited to suspending disbelief, both for whatever the eye is focusing on in the scene and for more subliminal but just as critical cues like shadows, reflections, and refractions. Like VR, raytracing has become a focus of the computer graphics industry, with hardware and software alike beginning to transition from traditional raster-based 3D graphics to raytracing. And also like VR, raytracing places maximum demand on the GPU, giving buyers two symbiotic reasons to upgrade for CAD visuals. (Furthermore, NVIDIA's latest Quadro RTX series delivers on both, with dedicated hardware acceleration for raytracing.)

Outfitting workstations and high-performance gaming rigs with two or more GPUs isn't a new option. But it remains very much a niche one, because its value has historically been limited: One, even a single entry-level GPU can now drive four or more monitors, eliminating one traditional motivation for configuring a second GPU; and two, teaming up multiple GPUs to render the same scene can certainly help performance, but it is not particularly efficient (for a variety of reasons not worth getting into here). Here's where the synergy between dual GPUs and VR comes in, however: allocating two GPUs to two different images turns out to be extremely efficient, and that is precisely what VR needs to create both left- and right-eye images every frame. Accordingly, for those looking to push their VR solution to the max, and with a budget to match, upgrading to a dual-GPU workstation configuration is a worthwhile avenue to consider. It's not a cheap way to go, but it can deliver solid bang for the buck.
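To see why the per-eye split parallelizes so cleanly, consider a minimal Python sketch. This is purely illustrative, not any vendor's multi-GPU API: the IPD value, function names, and the stand-in render function are all assumptions. The point is that the two eye views differ only by a small horizontal camera offset, and the two renders share no mutable state, so a driver can hand one to each GPU.

```python
# Illustrative sketch only: why one-GPU-per-eye is efficient in VR.
# IPD value and all names below are hypothetical, not a real API.

IPD = 0.064  # assumed average interpupillary distance, in meters


def eye_view_offset(eye: str, ipd: float = IPD) -> float:
    """Horizontal camera offset for one eye, in meters."""
    half = ipd / 2.0
    return -half if eye == "left" else half


def render_eye(scene, head_pos, x_offset):
    # Stand-in for a real per-eye render pass: just reports the
    # camera position used, shifted horizontally from the head.
    x, y, z = head_pos
    return (x + x_offset, y, z)


def render_frame(scene, head_pos):
    # The two calls below are fully independent, which is what lets
    # a multi-GPU setup dispatch one eye's render to each GPU.
    left = render_eye(scene, head_pos, eye_view_offset("left"))
    right = render_eye(scene, head_pos, eye_view_offset("right"))
    return left, right
```

The independence of the two calls is the whole story: unlike splitting one image across GPUs, there is no shared output surface to reconcile each frame.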


VR represents an ideal use case for a dual-GPU configuration. Image source: NVIDIA.


It’s not a cheap choice, but dual, linked RTX-class Quadro GPUs can deliver on both performance and raytraced quality for the ultimate VR experience. Image source: NVIDIA.


All HMDs Are Not Created Equal

Of course, just as with a traditional desk-based 3D computing environment, there are two sides to the visual computing equation: creating the image and displaying the image (or in this case, the two stereo images). And just as you can dial up the rendering side of the VR equation, so too can you choose from a range of HMDs representing a wide spectrum in both price and capability.

Yes, in differentiating HMDs, you’ll consider specs you’d naturally think about from the display monitor world. Resolution and field of view are two that carry over from the desktop/laptop world, with higher resolution and wider field of view (FOV) generally scaling with price. Remember, however, that desktop or laptop monitor resolution is an apple to the HMD’s orange: Because the HMD’s typical AMOLED display (like that on your smartphone, as opposed to the LCD on your desk/laptop) is used in a position far closer to your eyes, comparable perceived quality requires much higher resolution. As such, baseline resolution for professional-caliber HMDs will start around 2K (width) and max out at around 8K. Varjo’s VR-1 boasts a pixel density of 60 pixels per degree of FOV, a key threshold above which the average human eye won’t detect any noticeable improvement (similar to the spirit behind Apple’s Retina display, but at a much closer distance to the eye).
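The 60-pixels-per-degree threshold makes the resolution gap concrete. Here is a rough back-of-the-envelope Python sketch of the math; it assumes a uniform angular mapping across the field of view (real HMD optics distort this), and the example numbers are illustrative.

```python
# Back-of-the-envelope pixels-per-degree (PPD) math for HMD panels.
# Assumes uniform angular pixel mapping across the FOV (a
# simplification; real HMD lenses are not uniform).

def pixels_per_degree(horizontal_pixels: int, fov_degrees: float) -> float:
    """Approximate angular pixel density across the horizontal FOV."""
    return horizontal_pixels / fov_degrees


def pixels_needed(fov_degrees: float, target_ppd: float = 60.0) -> int:
    """Horizontal resolution per eye needed to hit a target PPD."""
    return round(fov_degrees * target_ppd)


# Example: a 2160-pixel-wide panel spread over a 100-degree FOV
# yields only ~21.6 PPD; reaching the 60-PPD "retinal" threshold
# at that FOV would demand 6000 pixels of horizontal resolution.
```

This is why a headset panel needs far more pixels than a desktop monitor of comparable perceived sharpness: the same pixels are spread over a much wider angle of the eye's view.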

A current snapshot of the emerging breadth in HMD products. Image source: Lenovo.

So yes, there are specs to compare that are common to monitor displays, but HMD features, especially at the higher end, extend well beyond conventional display metrics. HMDs need to manage far more functionality, serving as a sensor and controller within the virtual environment as well as the interface back to the host workstation. At a minimum, HMDs need to sense three degrees of freedom (3DOF), discerning where the user's head is "aimed" by tracking rotation about three axes: up and down (pitch), across the horizon (yaw), and about the line of sight (roll).

While 3DOF addresses angular orientation, however, it does nothing to indicate position in 3D Cartesian space: up/down, left/right, and forward/backward. That's where 6DOF comes in. More expensive to support, 6DOF adds position along the x, y, and z axes, allowing users to navigate in and around the 3D virtual world. Developed by Valve for SteamVR, and supported by most 6DOF-capable HMDs, Lighthouse is one currently popular tracking system for VR positioning. Like other "outside-in" approaches, Lighthouse requires placing tracking base stations in the physical area of use, which bounds the space the VR user can roam.


Full 6DOF, with three degrees of freedom each for rotation and position. Image source: Horia Ionescu.
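The 3DOF/6DOF distinction can be made concrete with a small Python sketch. The type and field names here are hypothetical, purely for illustration: 3DOF carries only the three rotation angles, while 6DOF adds the three position coordinates that make walking around a model meaningful.

```python
# Hypothetical pose types illustrating 3DOF vs. 6DOF tracking.
from dataclasses import dataclass


@dataclass
class Pose3DOF:
    # Angular orientation only: where the head is "aimed."
    pitch: float  # rotation up/down, in degrees
    yaw: float    # rotation across the horizon
    roll: float   # rotation about the line of sight


@dataclass
class Pose6DOF(Pose3DOF):
    # Adds position in 3D Cartesian space, enabling walk-around.
    x: float = 0.0  # left/right, in meters
    y: float = 0.0  # up/down
    z: float = 0.0  # forward/backward


def translate(pose: Pose6DOF, dx: float, dy: float, dz: float) -> Pose6DOF:
    """Move the viewpoint; only meaningful with 6DOF tracking."""
    return Pose6DOF(pose.pitch, pose.yaw, pose.roll,
                    pose.x + dx, pose.y + dy, pose.z + dz)
```

With only a `Pose3DOF`, a `translate` operation has nothing to act on; the viewer can look around but never step forward into the model.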

So which does a CAD user need, 3DOF or 6DOF? Viewing and manipulating a virtual model from a static viewpoint (with three degrees of rotation for viewing angles) might be acceptable for, say, inspecting the exploded detail of a brake assembly, but the same doesn't hold for AEC applications. 6DOF is of particular interest where the whole point is to get inside the room or building model and virtually walk around, experiencing the space from every perspective. And while 6DOF is obviously most critical in a roamable environment, it helps even in a seated experience, where subtle changes in the eye's 3D position in the virtual world provide the brain with critical depth-perception cues (for example, through motion parallax).

And while for the purposes of this series we've lumped VR and AR mostly into the same technology basket, a quality AR experience, with its mix of synthetic and live natural images, demands more. Rather than simply making for an incrementally better experience, sensing depth and position is a must-have for AR, which needs that information to determine precisely how to blend the natural and synthetic imagery in 3D space.


Consider eye-tracking functionality as well: a step up from simply tracking head position, and one required to leverage the benefits of advanced foveated rendering (discussed in Part 2).






