The Traditional Computer Memory Hierarchy — and Its Impact on CAD Performance — Are Evolving
18 Mar, 2020 By: Alex Herrera
Herrera on Hardware: The basic tenets of the tried-and-true memory hierarchy still apply, but it’s worth knowing how recent disruptors can improve performance — and perhaps shift the traditional balance.
SSDs and Optane Are Disrupting the Old Rules
The performance penalty associated with undersized memory depends on how expensive (i.e., slow) it is to go to the next level of the hierarchy when a page being accessed is not physically resident in memory, and on how often that happens. The penalty incurred by a page fault (all else being equal) therefore depends on how responsive the underlying storage device is — that is, how long it takes to return the first data word, and how quickly successive words follow. For the bulk of our collective history with computers, that storage device has been a hard disk drive (HDD). HDDs, to degrees depending on design and interface, are generally quite slow, and therefore incur a significant performance penalty whenever a page fault occurs.
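The relationship above can be sketched as a back-of-the-envelope average-access-time model. The latency figures below are rough, illustrative assumptions (DRAM on the order of 100 ns, an NVMe SSD on the order of 100 µs, an HDD on the order of 10 ms per faulted page), not measured benchmarks:

```python
def effective_access_ns(fault_rate, dram_ns=100, fault_penalty_ns=10_000_000):
    """Average memory access time, in nanoseconds, given a page-fault rate.

    fault_rate: fraction of accesses that miss DRAM and must go to storage.
    dram_ns / fault_penalty_ns: assumed latencies for the two tiers.
    """
    return (1 - fault_rate) * dram_ns + fault_rate * fault_penalty_ns

# Even one fault per 100,000 accesses roughly doubles average access time
# when the backing store is an HDD (~10 ms penalty)...
hdd_backed = effective_access_ns(1e-5, fault_penalty_ns=10_000_000)
# ...while an NVMe SSD (~100 us penalty) adds only about 1% overhead.
ssd_backed = effective_access_ns(1e-5, fault_penalty_ns=100_000)
```

The point of the sketch: because the storage penalty is so many orders of magnitude larger than a DRAM hit, even a tiny fault rate dominates average access time — which is why the responsiveness of the next tier down matters so much.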
The emergence of solid-state drives (SSDs) over the past decade has improved the status quo, with systems today increasingly populating an SSD instead of — or more likely in addition to — larger-capacity HDDs. And for good reason, as SSDs offer both better response and higher reliability (in the vast majority of cases). Today, the shift away from SATA-based SSDs sitting in traditional HDD drive bays is accelerating with the advent of the dedicated NVMe electrical and M.2 mechanical interfaces. (This previous Herrera on Hardware column introduced the emergence and relative merits of modern NVMe SSDs several years back.)
An even more recent technology influencing the traditional memory hierarchy — albeit one still maturing in the marketplace — is Intel’s Optane storage technology, which spurns ubiquitous NAND flash in favor of a completely different approach to non-volatile storage. Optane offers far superior performance characteristics to NAND storage (in most respects, but especially latency, with regular or random access), yet delivers much better data density and cost than DRAM. With those unique traits, Optane serves as an attractive option to create a new tier spanning the gap between DRAM and NAND SSDs in the pyramid.
Optane technology offers potential for a new tier in the existing memory hierarchy. Image source: Intel.
While falling in price over time and now reasonably accessible for performance-critical applications (thanks more to growing volume in servers and datacenters than PCs and workstations), Optane today remains beyond the reach of most mainstream workstation buyers. At a premium of about $500 per 256 GB or so, an Optane upgrade would cost around four times the more common upgrade of HDD to M.2 NVMe SSD. (For more on Optane, check out this previous Herrera on Hardware installment.)
Is the Old Hierarchy Balance Changing? Yes and No
Although those inverse relationships of cost and capacity versus performance moving up and down the pyramid are virtually universal, system providers and users can dial the size, performance, and technology options at each tier. And for CAD users — particularly those working with larger models of gigabytes and more — those configuration choices can improve performance at the cost of dollars. Striking the balance of dollars and performance, while not a perfect exercise, is worth paying attention to, particularly in the current era of both recently established technologies like NVMe SSDs and newly emerging non-volatile memory technologies like Optane.
For example, now that they are becoming ubiquitous in workstations, it’s worth exploring if and how much NVMe SSDs might disrupt the old standard memory hierarchy pyramid. And though not remotely commonplace yet, we can raise the same question for Optane, which offers an even more dramatic shift in performance metrics for non-volatile memory. If the tiers immediately beneath our tried-and-true DRAM are now a much higher-bandwidth device with — most importantly — significantly lower latency, could that change the optimum balance of a CAD workstation’s memory pyramid? Or if not optimal, does it mean that we might be able to get away with less DRAM, because the storage devices called upon for a page fault are so darn fast?
The answer for SSDs — even the fastest NVMe SSDs — is generally no. The difference in latency relative to DRAM (whether for a new random access or a new local access, the latter being much quicker) is still measured in multiple orders of magnitude. But we can simplify the premise a bit with one very reasonable assertion: At this point, every new CAD workstation should have an SSD of some size, whether or not that SSD is complemented with larger HDDs. This is fair, not necessarily because it is true today, but because it should be true today. No, SSDs are not at price parity with HDDs (which also continue to fall in cost per byte), but with an SSD at around a $240 (SATA) to $350 (M.2 NVMe) price premium (at 512 GB) and offering advantages in both performance and reliability, I'd argue that configuring a CAD professional's mission-critical machine's boot drive with an SSD should now be the default. (And in all but the most cost-conscious cases, snag the M.2 NVMe option, and outfit secondary data and backup storage with HDDs as desired.)
Once you consider the SSD a base option, don’t think of its presence as something that allows you to decrease the DRAM you configure, but something that can help reduce the performance penalties for worst-case workloads. That is, you still want to have your memory big enough that it’s not going to be the bottleneck, at least for the majority of your computing workloads. Especially given today’s pricing, for most I’d argue that means looking at 16 GB as a bare minimum, and have a good reason — e.g., a strict not-to-exceed budget, or confidence in consistent light demand — not to spring for 32 GB. If you’re among the fewer that deal often with very large datasets or compute-intensive, multithreaded simulations and renderings, you should consider budgeting another jump (or even two) higher. That is, while it’s not appropriate to think about an NVMe SSD as allowing you to cut back your DRAM size, the nice thing is that — compared to HDDs — it should at least help soften the performance blow when running into extreme data-intensive usage.
Something like Optane is a more difficult proposition to assess, however, at least from a technical perspective. With only (roughly) one order of magnitude slower response than DRAM, one might be tempted to spend a few dollars less on DRAM and trade that for an upgrade to some amount of Optane storage. But while it might be debatable if and to what degree that's a good tradeoff technically, from a practical perspective it's likely moot for most. Because if your purse strings are loose enough to spring for the premium drive, are you really going to skimp on relatively cheap DRAM? Optane will help mitigate the performance penalty and perhaps help you avoid overspending on DRAM, but those who are performance-conscious enough to spend on Optane won't want to shoot themselves in the foot by cutting back on a sensible DRAM footprint.
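To make the Optane-versus-NVMe comparison concrete, here is a minimal sketch using the column's rough framing — assumed ratios of Optane at roughly 10× DRAM latency and an NVMe SSD at roughly 1000×, with DRAM pegged at an illustrative 100 ns. These are stand-in numbers for illustration, not vendor specifications:

```python
DRAM_NS = 100  # assumed DRAM access latency, nanoseconds

# Assumed per-fault penalties for each backing tier, relative to DRAM
TIERS = {"Optane": 10 * DRAM_NS, "NVMe SSD": 1000 * DRAM_NS}

def fault_slowdown(fault_rate, penalty_ns, dram_ns=DRAM_NS):
    """How many times slower average access gets vs. an all-DRAM baseline."""
    avg = (1 - fault_rate) * dram_ns + fault_rate * penalty_ns
    return avg / dram_ns

# At a 1% fault rate, the Optane-backed system barely slows down,
# while the NVMe-backed system is an order of magnitude slower.
for tier, penalty in TIERS.items():
    print(tier, round(fault_slowdown(0.01, penalty), 2))
```

Under these assumptions, a workload faulting on 1% of accesses runs about 1.09× slower than all-DRAM when backed by Optane, versus roughly 11× slower when backed by an NVMe SSD — which is why an Optane tier makes the "skimp on DRAM" temptation at least arguable on paper, even if it remains impractical in the buying scenarios described above.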
In the coming months, I plan to revisit this topic with some experimentation of various mixes of DRAM and storage on real-world CAD applications and workloads to shed a more quantitative light on optimizing your machine’s balance of memory and storage, especially in the age of ubiquitous NVMe SSDs and emerging alternatives like Optane.