DDR4 Memory Technology: What's in a Number?
30 Sep, 2014 | By: Alex Herrera
Herrera on Hardware: Capable memory is essential for efficient workstation operation. But do you need the newest generation of memory technology?
Peak Bandwidth: The Theoretical Maximum Data Rate
With data transferred on both the rising and falling edges of the clock, DDR's maximum data rate is double the clock rate. DDR3 supports data transfer rates between 800 and 2,133 Mb/s (megabits per second, per data pin), while DDR4 supports a higher, but overlapping, range between 1,600 and 3,200 Mb/s. One reason DDR4 can reach higher frequencies is that the standard's signal voltage has shrunk, from DDR3's 1.5 V to 1.2 V. That lower voltage is also the reason DDR4 offers lower power consumption at the same frequency, a welcome bonus of the technology.
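The per-pin rates above translate into module bandwidth once you account for the standard 64-bit DIMM bus. A quick back-of-the-envelope sketch (the arithmetic, not any vendor's tool; rates are those cited above):

```python
# Peak bandwidth of a standard 64-bit DIMM: per-pin data rate
# times bus width, divided by 8 bits per byte.

def peak_bandwidth_gbs(rate_mts, bus_width_bits=64):
    """Peak bandwidth in GB/s for a given data rate in MT/s (= Mb/s per pin)."""
    return rate_mts * 1e6 * bus_width_bits / 8 / 1e9

for name, rate in [("DDR3-1866", 1866), ("DDR4-2133", 2133), ("DDR4-3200", 3200)]:
    print(f"{name}: {peak_bandwidth_gbs(rate):.1f} GB/s")
# DDR4-2133 works out to roughly 17 GB/s per channel, versus about 15 GB/s
# for DDR3-1866 -- a modest step, as the text suggests.
```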
The first workstation-class implementations of DDR4 are centered on 2,133 Mb/s. That's a tick up from today's common 1,866 Mb/s for DDR3, but it's worth noting that premium DDR3 memory modules already support that speed. Eventually, the price-to-performance sweet spot will surely rise to frequencies DDR3 can't match. But at least for the time being, DDR4 solutions aren't necessarily providing maximum transfer rates beyond what DDR3 can manage.
Latency's Effect on the Achievable Data Rate
While maximum bandwidth does say something about the best performance possible, taken alone it is far from the whole story. How often a CPU's memory controller can actually approach that theoretical maximum depends on how it addresses memory.
The reason stems from the way DRAM chips are constructed, with data bits stored in an array addressable by row and column. When a processor requests a read, it presents an address, which eventually gets translated into a row and column of a specific DRAM chip (or chips). The delay between the moment a new column address is presented and the time the data is returned (column address strobe, or CAS, latency) is a key component of overall latency. And if the address jumps to another row, the additional delay (row address strobe, or RAS, latency) is longer still.
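The row-and-column scheme can be sketched in a few lines. This is a toy illustration only: real memory controllers use mappings tied to the specific chip geometry, and the bit widths chosen here (10 column bits, 15 row bits) are purely hypothetical.

```python
# Toy sketch of splitting an address into DRAM coordinates.
# COL_BITS and ROW_BITS are illustrative, not from any real part.
COL_BITS = 10
ROW_BITS = 15

def dram_coords(addr):
    """Return the (row, column) a toy address maps to."""
    column = addr & ((1 << COL_BITS) - 1)
    row = (addr >> COL_BITS) & ((1 << ROW_BITS) - 1)
    return row, column

# Sequential addresses stay in one row, paying only CAS latency;
# a jump past the column range lands in a new row, adding RAS delay.
print(dram_coords(5))       # same row as address 6
print(dram_coords(5 + (1 << COL_BITS)))  # next row: a costlier access
```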
Computer architects play with memory organization and employ caches to hide or mitigate the impact of latency. Still, shorter latency is always better, and in that respect, DDR4 provides no improvement over DDR3. Both CAS and RAS latency are measured in clock cycles, not nanoseconds, and at the same rate of 2,133 Mb/s, DDR4's latency in cycles is actually higher than its predecessor's. That doesn't mean DDR4 will be significantly slower than DDR3, but it does mean it won't be any quicker to access first data, at least not at initial volume production speeds.
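Converting cycle counts to wall-clock time makes the comparison concrete. A minimal sketch, assuming illustrative CAS timings of CL11 for DDR3-2133 and CL15 for DDR4-2133 (typical module values, not figures from the article):

```python
# Convert CAS latency in clock cycles to nanoseconds.
# The I/O clock runs at half the data rate, since DDR moves
# data on both clock edges.

def cas_latency_ns(data_rate_mts, cas_cycles):
    clock_mhz = data_rate_mts / 2          # e.g. 2,133 MT/s -> 1,066.5 MHz
    return cas_cycles / clock_mhz * 1000   # cycles / MHz * 1000 = ns

# Illustrative (assumed) timings for modules at the same 2,133 rate:
for name, rate, cl in [("DDR3-2133 CL11", 2133, 11),
                       ("DDR4-2133 CL15", 2133, 15)]:
    print(f"{name}: {cas_latency_ns(rate, cl):.1f} ns")
```

At the same data rate, more cycles means more nanoseconds, which is why first-access time doesn't improve with early DDR4 despite the newer technology.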
DDR3 vs. DDR4: Is It a Choice?
So if DDR4 doesn't provide a major performance advantage over DDR3 today, should you still choose it for your new CAD workstation(s)? That question has a few answers. First, DDR4 offers other advantages over DDR3, including higher memory density and lower power consumption. Second, today's comparisons pit late-generation DDR3 against first-generation DDR4; the latter's speeds will continue to climb, eventually providing a bandwidth edge over the former, albeit one that is again memory-access dependent. Third, and perhaps most importantly, for most professionals there really isn't a choice to be made.
There's no question whether DDR4 will overtake DDR3: it will. Intel's platforms are making the move, and DRAM vendors are all ramping volume. Workstation vendors, including volume leaders HP, Dell, and Lenovo, have all launched new models based on Haswell-E that support DDR4. So if you're shopping for a new machine this fall, chances are you'll simply get DDR4. Even if you do have the choice during a transition period, it's not a significant enough issue to make it a priority criterion in your purchase decision.
And that's all OK. The move to DDR4 is no game-changing revolution (nor was the move to DDR3 before it), but that doesn't mean it's not valuable. It's an essential evolutionary step the industry needs to take to keep delivering the steady performance gains hard-core users demand from investment in new hardware.