Today Rambus is announcing their second generation DDR5 RCD chip running at up to 5600MT/s. If you recall what SemiAccurate said about Intel and AMD’s DDR5 plans a few days ago, you will recognize that number.
RCD, or Registering Clock Driver, also known as that chip in the middle of the DIMM you probably never knew the name of until just now, is vital to modern memory. It takes the clock and command/address signals and turns them into usable information for the memory dies themselves. The memory can then dump the requested data across a very wide memory bus back to the controller that asked for it. LR (Load Reduced) DIMMs also carry DBs, or Data Buffers, which Rambus makes as well and which work alongside the RCDs. This has been an oversimplified memory tutorial, it isn't meant to be comprehensive.
Rambus releases a new ‘spot the difference’ puzzle, can you win?
As you can see above there are a lot of differences between DDR4 and DDR5, some immediately obvious and some not. Did you spot the fact that DDR4 is x9/72b wide and DDR5 is x10/80b wide? If so you may also have noticed why: a DDR5 DIMM is actually two channels of 40b each, so if you want ECC coverage you need 8b per channel, 4b won't cut it. This two channel layout also lets you do some nifty party tricks like shutting down half the DIMM to save power, though you still have to run the clock on both sides. That said, good luck pulling this trick off with DDR4.
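The width arithmetic above can be sketched in a few lines; this is just a restatement of the x9/72b vs x10/80b figures from the article, not anything official.

```python
# DDR4 ECC DIMM: one channel, 64b data + 8b ECC = 72b (x9 in x8-device terms).
ddr4_data, ddr4_ecc = 64, 8
ddr4_width = ddr4_data + ddr4_ecc

# DDR5 ECC DIMM: two independent channels, each 32b data + 8b ECC = 40b,
# for 80b total (x10). Halving ECC to 4b per channel would not cover a channel.
ddr5_data_per_ch, ddr5_ecc_per_ch = 32, 8
ddr5_width = 2 * (ddr5_data_per_ch + ddr5_ecc_per_ch)

print(ddr4_width, ddr5_width)  # 72 80
```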
Back to the details: not only does DDR5 split the DIMM into two separate blocks, each with its own address and command signals, it also encodes those signals on one shared set of pins rather than the many dedicated-function pins of DDR4. This keeps the pin count down to DDR4 levels while adding signaling capability, 10b vs 33b wide for the pedantic. Clocks go way up too, so the net result should be more or less the same latency for those signals.
DDR4 vs DDR5 high level differences
One other neat trick is that the length of a packet often dictates efficiency; longer usually means less overhead, so you can send more data at a given frequency. The downside is that PCs tend to require 8-beat bursts on a 64b bus, 64 bytes, to fill a cache line in one go. More is often problematic or at least wasteful, and less carries a harsh speed/latency penalty. To make DDR5 more efficient the burst length was doubled to 16, which would be two cache lines and doesn't work well with PCs… unless you halve the width. DDR5 did just that, and the two independent channels can each service a different request, filling two separate cache lines at once.
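The burst math above is easy to check; this little sketch assumes the usual 64-byte x86 cache line and the bus widths and burst lengths from the paragraph.

```python
def bytes_per_burst(bus_bits: int, burst_len: int) -> int:
    """Bytes delivered by one burst: bus width in bits times beats, over 8."""
    return bus_bits * burst_len // 8

CACHE_LINE = 64  # bytes, the typical PC cache line

# DDR4: 64b data bus at burst length 8 -> exactly one cache line per burst.
assert bytes_per_burst(64, 8) == CACHE_LINE

# DDR5: each 32b channel at burst length 16 -> still one cache line per burst,
# and the two channels can be filling two different cache lines at once.
assert bytes_per_burst(32, 16) == CACHE_LINE

# Keeping a 64b bus at burst length 16 would overshoot to two cache lines.
assert bytes_per_burst(64, 16) == 2 * CACHE_LINE
```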
On the more technical minutiae side, DDR4 officially supported up to 16Gb dies; DDR5 goes up to 64Gb dies and 16-high stacks vs 8-high on DDR4. Out of the gate that means 256GB DIMMs are possible without stacking, but it will be a long time before we see that in practice. Voltage also drops from 1.2V on DDR4 to 1.1V on DDR5, plus the PMIC is now directly on the DIMM instead of on the board. All of these things together will mean a lot higher efficiency for DDR5.
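One way the 256GB figure can fall out of 64Gb dies without stacking, assuming a hypothetical dual-rank DIMM of x4 devices and counting only the data devices:

```python
die_gbit = 64        # max monolithic DDR5 die density, in gigabits
device_width = 4     # x4 DRAM devices (assumed layout, for illustration)
data_bits = 64       # data width across both channels, ECC devices excluded
ranks = 2            # dual-rank, one rank per side (assumed)

devices_per_rank = data_bits // device_width          # 16 data devices
capacity_gbit = devices_per_rank * ranks * die_gbit   # total data gigabits
capacity_gbyte = capacity_gbit // 8

print(capacity_gbyte, "GB")  # 256 GB
```

Other device widths and rank counts can reach the same total; this is just one plausible arrangement.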
Now that we have done the less oversimplified DDR5 tutorial, what exactly is Rambus making? They make the RCDs and DBs for DDR3, DDR4, DDR5, and NVDIMMs, all of which are on the market now. The first DDR5 RCDs work at up to DDR5/4800 clocks, which is fine for first gen servers that don't have a DDR4 fallback like consumer parts. The new 2nd gen Rambus RCDs and DBs up that number to DDR5/5600, a pretty impressive jump for one generation. These should be out in time to plug into the second generation of DDR5 servers that need them, so enablement is well under way already.
In the end DDR5 will have a lot of tricks to play vs DDR4. It starts out 50% faster than DDR4, 4800MT/s vs 3200MT/s, and runs at a lower voltage for better efficiency. Toss in the dual channel party tricks and you add a lot more flexibility to the mix, plus reliability is said to be at least as good as the older parts. With the RCDs and DBs in production now for first gen DDR5 and well underway for the second, it looks like the module side has things covered. Now we wait for the CPUs.S|A