DDR5 Server Chipsets Archives - Rambus

At Rambus, we create cutting-edge semiconductor and IP products, providing industry-leading chips and silicon IP to make data faster and safer.

Memory Bandwidth and DDR5 MRDIMMs Explained in this Ask the Experts
https://www.rambus.com/blogs/ask-the-experts-ddr5-mrdimms/
Tue, 12 Nov 2024

John Eble, Vice President of Product Marketing for Memory Interface Chips at Rambus, recently shared the latest developments on the MRDIMM (Multiplexed Rank DIMM) DDR5 memory module architecture. This cutting-edge technology brings significant advances in memory bandwidth and capacity to support compute-intensive workloads, including generative AI.

What is MRDIMM?

MRDIMM builds upon the existing DDR5 infrastructure to ease implementation while providing a substantial performance boost. Its architecture is designed to double the data rate per signal pin, significantly enhancing bandwidth while preserving DDR5 signal routing between hosts and memory modules. It does so by introducing key innovations such as parallel activation and access of DRAM ranks and data stream multiplexing, effectively unlocking higher data transfer rates.

Key Innovations of MRDIMM

The MRDIMM architecture enhances performance in important ways:

  1. Parallel DRAM Activation: MRDIMM enables pairs of DRAM ranks to be activated and accessed in parallel. This innovation plays a crucial role in increasing data throughput.
  2. Multiplexed Data Streams: By multiplexing data streams, MRDIMM can effectively double the data rate for each signal pin, resulting in a substantial improvement in memory bandwidth. MRDIMM 12800 provides a data rate of 12,800 MT/s using DDR5 DRAMs that operate at 6,400 MT/s.
  3. Increased Capacity: Unlike traditional memory modules, MRDIMM supports more than two ranks of DRAM. This capability allows for increased memory capacity in a cost-efficient manner.
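
The innovations above translate into a straightforward peak-bandwidth calculation. A quick illustrative sketch in Python, assuming the standard 64 data bits (8 bytes) per DIMM; these are peak figures only, and sustained bandwidth depends on access patterns:

```python
def peak_bandwidth_gbs(data_rate_mts, data_bytes=8):
    """Peak bandwidth in GB/s = transfers per second x bytes per transfer."""
    return data_rate_mts * 1e6 * data_bytes / 1e9

rdimm_6400 = peak_bandwidth_gbs(6400)     # standard DDR5 RDIMM
mrdimm_12800 = peak_bandwidth_gbs(12800)  # MRDIMM: same pins, doubled rate

print(f"RDIMM 6400:   {rdimm_6400:.1f} GB/s")    # 51.2 GB/s
print(f"MRDIMM 12800: {mrdimm_12800:.1f} GB/s")  # 102.4 GB/s
```

The doubling comes from the multiplexing, not from faster DRAM: the same 6,400 MT/s devices feed a 12,800 MT/s host interface.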

New Components for MRDIMM

To support its high-performance architecture, MRDIMM requires several new components, designed to work together seamlessly:

  • MRCD (Multiplexing Registering Clock Driver): The MRCD extends the typical registering clock driver function to receive an interleaved stream of DRAM commands at twice the typical RDIMM rate. It deinterleaves the command data stream and then steers it correctly to its rank-specific outputs.
  • MDB (Multiplexing Data Buffer): Ten of these chips per MRDIMM provide the multiplexing and demultiplexing necessary to convert a 16-bit DRAM interface running at native DRAM speed to an 8-bit host interface running at twice that speed.  MDBs also provide load isolation to the host or CPU, which is a key enabler for MRDIMM to increase the number of ranks and overall capacity of the module.
  • PMIC 5030 Power Management IC: Given the parallel activations of DRAM ranks and the additional chips added to the chipset, the absolute power envelope of the module is higher than a typical RDIMM. The new PMIC 5030 is designed to comfortably handle the amount of power required of such a high-bandwidth/high-capacity DIMM.
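
The MDB's width conversion can be pictured as simple interleaving. The sketch below is a conceptual illustration of the idea, not the MDB's actual implementation: each 16-bit DRAM-side word per DRAM cycle becomes two 8-bit host-side transfers at twice the rate.

```python
def mux_to_host(dram_words_16bit):
    """Split each 16-bit DRAM-side word into two 8-bit host transfers."""
    host_bytes = []
    for word in dram_words_16bit:
        host_bytes.append((word >> 8) & 0xFF)  # high byte, first transfer
        host_bytes.append(word & 0xFF)         # low byte, second transfer
    return host_bytes

def demux_to_dram(host_bytes):
    """Reassemble pairs of 8-bit host transfers into 16-bit DRAM words."""
    return [(host_bytes[i] << 8) | host_bytes[i + 1]
            for i in range(0, len(host_bytes), 2)]

words = [0xA1B2, 0xC3D4]
assert demux_to_dram(mux_to_host(words)) == words
# Two DRAM-rate words become four host-rate transfers: half the pins,
# twice the signaling rate, same payload.
```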

Flexibility and Compatibility

One of the standout features of MRDIMM is its compatibility as a drop-in replacement for server main memory upgrades. This design approach provides a high level of flexibility, allowing data centers and enterprises to adopt MRDIMM for enhanced memory performance while preserving DDR5 server architecture.

Rambus’ Expertise in High-Quality Memory Solutions

Eble emphasized Rambus’ long-standing commitment to developing reliable, interoperable memory solutions. With decades of expertise, Rambus is well positioned to lead advancements in memory technology, ensuring that MRDIMM modules meet the rigorous demands of modern computing environments.

Looking Ahead

As the memory demands of advanced workloads grow, innovations like MRDIMM represent critical enablers of the continued progression of computing performance. With its ability to increase both bandwidth and capacity while maintaining compatibility with existing server architecture, MRDIMM is poised to become an important element of cloud and enterprise data centers.

Watch the full video interview below or skip down the page to read the key takeaways.

Expert

John Eble, Vice President of Product Marketing for Memory Interface Chips, Rambus

Key Takeaways

1. Enhanced DDR5 Architecture: MRDIMM is a new DDR5 memory module architecture that significantly increases memory bandwidth and capacity by utilizing parallel access and multiplexing techniques.

2. Doubled Bandwidth: MRDIMM modules effectively double the data rate per signal pin, which doubles the bandwidth available to the CPU per DIMM slot compared to standard DDR5 RDIMMs.

3. Increased DRAM Capacity: The new architecture allows for more than two ranks of DRAM, enabling cost-efficient capacity increases with configurations of up to 8 ranks of single or dual die packages.

4. New Memory Interface Chips: MRDIMM requires new and upgraded components, including the multiplexing registered clock driver (MRCD) and multiplexing data buffer (MDB), as well as a new power management IC (PMIC 5030) to handle higher power demands.

5. Future Roadmap: Servers utilizing MRDIMM 12800 are expected to launch in 2026, with future MRDIMM modules leveraging still faster DRAMs and advanced signaling innovations to achieve even higher speeds and capacities.

Key Quote

One of the nice things about this technology is that it can be a drop-in replacement. A single motherboard design will support both MRDIMM and RDIMM as the DIMM connector is the same and the routing topology, the physical layer, is the same from the host to the DIMM. So, users do not need to decide on MRDIMM or RDIMM when designing their servers, or even when initially deploying a server as they can always come back at a later time and upgrade. This provides a lot of flexibility through the life cycle of the server.

DDR5 vs DDR4 DRAM – All the Advantages & Design Challenges
https://www.rambus.com/blogs/get-ready-for-ddr5-dimm-chipsets/
Mon, 29 Jul 2024

[Last updated on: July 29, 2024] On July 14th, 2020, JEDEC announced the publication of the JESD79-5 DDR5 SDRAM standard, signaling the industry transition to DDR5 server and client dual-inline memory modules (DIMMs). DDR5 memory brings a number of key performance gains to the table, as well as new design challenges. Computing system architects, designers, and purchasers want to know what’s new in DDR5 vs DDR4 and how they can get the most from this new generation of memory.


Performance: what changes in DDR5 vs DDR4 DRAM?

The top seven most significant specification advances made in the transition from DDR4 to DDR5 DIMMs are shown in the table below.

DDR5 vs DDR4 Comparison Table

Feature                   DDR4                          DDR5
Data rate                 Up to 3.2 GT/s                4.8 GT/s, scaling to 8.4 GT/s
Operating voltage (VDD)   1.2 V                         1.1 V
Power management          Regulator on motherboard      PMIC on DIMM
Channel architecture      One 72-bit channel            Two independent 40-bit channels
Burst length              BC4 / BL8                     BC8 / BL16
Max DRAM density (SDP)    16 Gb                         64 Gb
Module management         SPD IC over I2C               SPD Hub and temperature sensors over I3C

1. DDR5 Scales to 8.4 GT/s

You can never have enough memory bandwidth, and DDR5 helps feed that insatiable need for speed. While DDR4 DIMMs top out at 3.2 gigatransfers per second (GT/s) at a clock rate of 1.6 gigahertz (GHz), initial DDR5 DIMMs delivered a 50% bandwidth increase to 4.8 GT/s. DDR5 memory will ultimately scale to a data rate of 8.4 GT/s. New features, such as Decision Feedback Equalization (DFE), were incorporated in DDR5 enabling the higher IO speeds and data rates.
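
The clock and data rates above follow the double-data-rate relationship: data transfers on both clock edges, so MT/s = 2 x I/O clock in MHz. A quick illustrative check of the figures in this section:

```python
def data_rate_mts(clock_mhz):
    """DDR transfers on both clock edges: data rate = 2 x I/O clock."""
    return 2 * clock_mhz

assert data_rate_mts(1600) == 3200  # DDR4 top-end: 1.6 GHz clock -> 3.2 GT/s
assert data_rate_mts(2400) == 4800  # initial DDR5 DIMMs
assert data_rate_mts(4200) == 8400  # DDR5's ultimate target rate

uplift = (4800 - 3200) / 3200
print(f"Initial DDR5 uplift over DDR4: {uplift:.0%}")  # 50%
```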

2. Lower Voltage Keeps Power Manageable

A second major change is a reduction in operating voltage (VDD), which helps offset the power increase that comes with running at higher speed. With DDR5, the supply voltage for the DRAM and the registering clock driver (RCD) drops from 1.2 V down to 1.1 V. Command/Address (CA) signaling changes from SSTL to PODL, which has the advantage of burning no static power when the pins are parked in the high state.

3. New Power Architecture for DDR5 DIMMs

A third change, and a major one, is power architecture. With DDR5 DIMMs, power management moves from the motherboard to the DIMM itself.  DDR5 DIMMs will have a 12-V power management IC (PMIC) on DIMM allowing for better granularity of system power loading. The PMIC distributes the 1.1 V VDD supply, helping with signal integrity and noise with better on-DIMM control of the power supply.

4. DDR5 vs DDR4 Channel Architecture

Another major change with DDR5, number four on our list, is a new DIMM channel architecture. DDR4 DIMMs have a 72-bit bus, composed of 64 data bits plus eight ECC bits. With DDR5, each DIMM has two channels, each 40 bits wide: 32 data bits plus eight ECC bits. While the total data width is the same (64 bits), having two smaller independent channels improves memory access efficiency. So not only does DDR5 deliver a raw speed bump, the benefit of that higher MT/s is amplified by greater access efficiency.

In the DDR5 DIMM architecture, the left and right side of the DIMM, each served by an independent 40-bit wide channel, share the RCD. In DDR4, the RCD provides two output clocks per side. In DDR5, the RCD provides four output clocks per side. In the highest density DIMMs with x4 DRAMs, this allows each group of 5 DRAMs (single rank, half-channel) to receive its own independent clock. Giving each rank and half-channel an independent clock improves signal integrity, helping to address the lower noise margin issue raised by lowering the VDD (from change #2 above).

5. Longer Burst Length

The fifth major change is burst length. DDR4 has a burst chop length of four and a burst length of eight. For DDR5, burst chop and burst length are extended to eight and sixteen, respectively, to increase burst payload. A burst length of sixteen (BL16) allows a single burst to access 64 bytes of data, the typical CPU cache line size, using only one of the two independent channels. This provides a significant improvement in concurrency and, with two channels, greater memory efficiency.
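
The burst payload arithmetic is simply beats per burst multiplied by the channel data width. A small sketch showing why BL16 on one DDR5 subchannel matches a full DDR4 burst:

```python
def burst_payload_bytes(burst_length, data_bits):
    """Bytes delivered per burst = beats x data bits per beat / 8."""
    return burst_length * data_bits // 8

ddr4 = burst_payload_bytes(8, 64)   # BL8 across DDR4's 64 data bits
ddr5 = burst_payload_bytes(16, 32)  # BL16 on one 32-bit DDR5 subchannel

print(ddr4, ddr5)  # both 64 bytes: one CPU cache line
```

Both come out to 64 bytes, but DDR5 fetches the cache line from a single subchannel, leaving the other subchannel free to serve a different request concurrently.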

6. DDR5 Supports Higher Capacity DRAM

A sixth change to highlight is DDR5’s support for higher capacity DRAM devices. With DDR5 buffer chip DIMMs, the server or system designer can use densities of up to 64 Gb DRAMs in a single-die package. DDR4 maxes out at 16 Gb DRAM in a single-die package (SDP). DDR5 supports features like on-die ECC, error transparency mode, post-package repair, and read and write CRC modes to support higher-capacity DRAMs. The impact of higher capacity devices obviously translates to higher capacity DIMMs. So, while DDR4 DIMMs can have capacities of up to 64 GB (using SDP), DDR5 SDP-based DIMMs quadruple that to 256 GB.
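
The capacity arithmetic behind those DIMM figures can be sketched as follows, assuming a dual-rank module built from x4 devices, with ECC devices excluded from usable capacity (an illustrative model, not a full module configurator):

```python
def dimm_capacity_gb(density_gbit, device_width, ranks, data_bits=64):
    """Usable DIMM capacity from device density, width, and rank count."""
    devices_per_rank = data_bits // device_width  # 64 / 4 = 16 data devices
    return density_gbit * devices_per_rank * ranks // 8  # Gbit -> GB

ddr4_max = dimm_capacity_gb(16, 4, 2)  # 16 Gb single-die packages
ddr5_max = dimm_capacity_gb(64, 4, 2)  # 64 Gb single-die packages

print(ddr4_max, ddr5_max)  # 64 GB vs 256 GB
```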

7. A Smarter DIMM with DDR5

The DDR5 server DIMM chipset replaces the DDR4 SPD IC with an SPD Hub IC and adds two temperature sensor (TS) ICs. The SPD Hub has an integrated TS, which in conjunction with the two discrete TS ICs, provides three points of thermal telemetry from the RDIMM.

With DDR5, the communication bus between chips gets an upgrade to I3C running 10X faster than the I2C bus used in DDR4. The DDR5 SPD Hub handles communication from the module to the Baseboard Management Controller (BMC). Using the faster I3C protocol, the DDR5 SPD Hub reduces initialization time and supports a higher rate of polling and real-time control.

Thermal information, communicated from the SPD Hub to the BMC, can be used to manage cooling fan speed. DRAM refresh rate can now be more finely managed to provide for higher performance or higher retention, and if the RDIMM is running too hot, bandwidth can be throttled as needed to reduce the thermal load.


What are the DDR5 Design Challenges?

[Image: DDR5 DIMM chipset – DDR5 RDIMMs showing Rambus memory interface chips]

These changes in DDR5 introduce a number of design considerations dealing with higher speeds and lower voltages – raising a new round of signal integrity challenges. Designers will need to ensure that motherboards and DIMMs can handle the higher signal speeds. When performing system-level simulations, signal integrity at all DRAM locations needs to be checked.

For DDR4 designs, the primary signal integrity challenges were on the double-data-rate DQ bus, with less attention paid to the lower-speed command/address (CA) bus. For DDR5 designs, even the CA bus will require special attention for signal integrity. In DDR4, there was consideration for using decision feedback equalization (DFE) to improve the DQ data channel. But for DDR5, the RCD’s CA bus receivers will also require DFE to ensure good signal reception.

The power delivery network (PDN) on the motherboard is another consideration, including up to the DIMM with the PMIC. Considering the higher clock and data rates, you will want to make sure that the PDN can handle the load of running at higher speed, with good signal integrity, and with good clean power supplies to the DIMMs.

The DIMM connectors from the motherboard to the DIMM will also have to handle the new clock and data rates. For the system designer, at the higher clock speeds and data rates around the printed circuit board (PCB), more emphasis must be placed on system design for electromagnetic interference and compatibility (EMI and EMC).

How do DDR5 memory interface chipsets harness the advantages of DDR5 for DIMMs?

The good news is that DDR5 memory interface chips improve signal integrity for the command and address signals sent from the host memory controller to the DIMMs. The bus for each of the two channels goes to the RCD and then fans out to the two halves of the DIMM. The RCD effectively reduces the loading on the CA bus that the host memory controller sees.

The expanded chipset including PMIC, SPD Hub and TS enable a smarter DIMM which can operate at the higher data rates of DDR5 while remaining within the desired power and thermal envelope.

Rambus offers a full DDR5 memory interface chipset that helps designers harness the full advantages of DDR5 while dealing with the signal integrity challenges of higher data, CA and clock speeds. Rambus was the first in the industry to deliver a DDR5 RCD to 5600 MT/s and is continually advancing the performance of its DDR5 solutions to meet growing market needs. The Rambus DDR5 RCD has now reached performance levels of 7200 MT/s.

As DDR5 evolves and makes its way to the client space, the Rambus DDR5 client memory interface chipset enables client DIMMs (CSODIMMs and CUDIMMs) to deliver new levels of memory performance for demanding gaming, content creation and AI workloads on PCs. The DDR5 Client DIMM Chipset includes a DDR5 Client Clock Driver (CKD) and Serial Presence Detect Hubs (SPD Hub).

As a renowned leader in signal integrity (SI) and power integrity (PI), Rambus has over 30 years’ experience in enabling the highest performance systems in the market.

Additional resources on DDR5:
What’s Next for DDR5 Memory?
Data Center Evolution: DDR5 DIMMs Advance Server Performance

John Eble Dives into Chipsets for Server and Client Systems in Ask the Experts
https://www.rambus.com/blogs/ask-the-experts-ddr5-client-chipset/
Mon, 29 Jul 2024

In this episode of “Ask the Experts,” John Eble, Vice President of Product Marketing for Memory Interface Chips at Rambus, discusses the development of advanced chipsets for both server and client systems. He highlights the need for robust chipsets to maintain the precise timing required in memory subsystems as frequencies increase.

Eble also introduced Rambus’ new Client Clock Driver (CKD), which enables DDR5 client DIMMs (CSODIMMs and CUDIMMs) to operate at data rates of 6400 megatransfers per second (MT/s) and above. Applications, led by AI, continue to push for higher data rates and greater memory capacity.

At 6400 MT/s and higher, the CKD buffers and retimes the clock to ensure the synchronous memory system can operate within its timing budget. Absent the CKD, that would not be possible because of jitter on the clock signal.

Eble concluded by underscoring Rambus’s expertise in managing power and signal integrity, its high-volume experience, and its strong partnerships in the memory industry across the supply chain.

Expert

  • John Eble, VP of Product Marketing for Memory Interface Chips, Rambus

Key Takeaways

  1. Advanced Chipset Development: Rambus is developing advanced chipsets for both server and client systems to meet the need for more bandwidth and capacity while maintaining the power envelope of memory modules. These chipsets address challenges related to meeting the precise timing required in synchronous memory subsystems.
  2. New Client Clock Driver: Rambus has announced a Client Clock Driver (CKD) that reduces jitter and timing uncertainty in PC memory systems. This CKD re-drives the clock, restoring its amplitude and re-timing the signal to reduce noise. This allows for sufficient margin to close the timing budget as data rates scale to 6400 and beyond.
  3. Managing High Frequency Jitter: The need for the CKD becomes more critical as clock frequencies increase. At higher frequencies, interference mechanisms that reduce signal integrity, such as reflections due to impedance discontinuities and crosstalk, become more pronounced. The CKD helps manage overall jitter and close the timing budget for a robust system.
  4. AI Driven Performance Enhancements: AI applications are driving the need to push up performance in client systems. AI inferencing requires high memory capacity and bandwidth for real-time results from large models. Rambus’ CKD supports PCs and memory modules running at 6400 MT/s and extends up to the next speed bin on the roadmap at 7200 MT/s.
  5. Rambus’ Memory Innovation: Rambus brings over 30 years of innovation in the memory space and renowned expertise in managing power integrity and signal integrity. With the CKD, Rambus brings that experience to the client space. The company has a strong track record as a first-class semiconductor product company with high volume experience and strong partnerships in the supply chain, memory industry, and with end customers.

Key Quote

As we continue to go to higher and higher frequencies, we do see the need for a more robust chipset for client DIMMs. The real challenge in these systems, both server and client, is to make sure that the accumulated timing uncertainty or jitter in the round trip is low enough so that there is sufficient timing margin at these very high speeds. Fundamentally, the clock driver is solving a jitter and timing uncertainty problem. It reduces jitter in the synchronous memory system and provides sufficient margin to close the timing budget as we scale up to data rates of 6400 and beyond.


Power Management: A Key Enabler of Memory Performance
https://www.rambus.com/blogs/power-management-a-key-enabler-of-memory-performance/
Tue, 30 Apr 2024

In planning for DDR5, the industry laid out ambitious goals for memory bandwidth and capacity while aiming to keep power within the same envelope on a per-module basis. To achieve these goals, DDR5 required a smarter DIMM architecture, one that would embed more intelligence in the DIMM and increase its power efficiency. One of the largest architectural changes was moving power management from the motherboard to an on-module Power Management IC (PMIC) on each DDR5 RDIMM.

This change follows a broader trend in microelectronic systems: to optimize power, it’s best to deliver as high a voltage as possible to the endpoint where the power is consumed, and then, at the endpoint, regulate that incoming higher voltage down to the lower voltages and higher currents required by the endpoint components.

In previous DDR generations, the regulator was on the motherboard, and it had to deliver a low voltage at high current across the motherboard, through a connector and then onto the DIMM. As supply voltages were reduced over time (to maintain power levels at higher data rates), it was a growing challenge to maintain the desired voltage level because of IR drop. By implementing a PMIC on the DDR5 RDIMM, the problem with IR drop was essentially eliminated.
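
The IR-drop advantage follows directly from Ohm's law: delivering the same power at 12 V instead of 1.1 V cuts the current, and hence the V = I x R loss across the board traces and connector, by more than an order of magnitude. The sketch below uses hypothetical resistance and power figures (not from the article) purely for illustration:

```python
def ir_drop_v(power_w, supply_v, path_resistance_ohm):
    """Voltage lost across the delivery path: V = I x R, with I = P / V."""
    current_a = power_w / supply_v
    return current_a * path_resistance_ohm

PATH_R = 0.005  # assumed 5 milliohm motherboard-trace + connector path
POWER = 15.0    # assumed per-DIMM power in watts

drop_1v1 = ir_drop_v(POWER, 1.1, PATH_R)   # old scheme: regulate on board
drop_12v = ir_drop_v(POWER, 12.0, PATH_R)  # DDR5: regulate on the DIMM

print(f"1.1 V delivery: {drop_1v1*1000:.1f} mV lost")  # ~68 mV, ~6% of VDD
print(f"12 V delivery:  {drop_12v*1000:.1f} mV lost")  # ~6 mV, negligible
```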

In addition, the on-DIMM PMIC allows for very fine-grain control of the voltage levels supplied to the various components on the DIMM. As such, DIMM suppliers can really dial in the best power levels for the performance target of a particular DIMM configuration.

The upshot is that power management has become a major enabler of increasing memory performance. Advancing memory performance has been the mission of Rambus for nearly 35 years. We’re intimate with memory subsystem design on modules, with expertise across many critical enabling technologies, and have demonstrated the disciplines required to successfully develop chips for the challenging module environment with its increased power density, space constraints and complex thermal management.

As part of the development of our industry-leading DDR5 memory interface chipset, and given our heritage and mission, Rambus built a world-class power management team and has now introduced a new family of best-in-class DDR5 server PMICs. This new server PMIC product family lays the foundation for a roadmap of future power management chips. As AI continues to expand from training to inference, increasing demands on memory performance will extend beyond servers to client systems and drive the need for new PMIC solutions tailored for emerging use cases and form factors.

We recently sat down with John Eble, vice president of product marketing for Rambus Memory Interface Chips, to learn more about DDR5 module technology including PMICs. Watch the video below:

DDR5 Memory Interface Chips on the latest Ask the Experts
https://www.rambus.com/blogs/ask-the-experts-ddr5-memory-interface-chips/
Mon, 29 Apr 2024

In this episode of “Ask the Experts,” John Eble, Vice President of Product Marketing for Memory Interface Chips at Rambus, discusses advancements in DDR5 server RDIMM memory modules. Eble highlights the four critical logic components of DDR5 RDIMMs: two are enhanced versions of chips used in DDR4 memory modules, and two are new.

The registering clock driver (RCD) has been upgraded for improved signal integrity to the DRAM, while the serial presence detect (SPD) IC now includes a temperature sensor and a bi-directional ability to buffer the system management bus. The two new components are a standalone temperature sensor and a power management integrated circuit (PMIC) for improved power integrity.

Eble also discusses the new Rambus family of server PMICs, designed to optimize power efficiency for different output currents. The Extreme Current PMIC (PMIC 5020) from Rambus is an industry-leading device that will support high-capacity RDIMMs and future server platforms operating at 7200 megatransfers per second (MT/s), likely launching in 2025.

These new servers are part of the unprecedented rate of new platform introductions driven by high-performance workloads with generative AI being the prime example. Rapid advancements in DDR5 memory will continue to provide the bandwidth and capacity needed by these compute-intensive applications.

Speakers

  • John Eble, VP of Product Marketing for Memory Interface Chips, Rambus

Key Takeaways

  1. Advanced Chipset for DDR5 RDIMMs: DDR5 RDIMMs depend on four critical logic components, two of which are enhanced devices from DDR4 and two of which are brand new. These components, the Registering Clock Driver, Serial Presence Detect Hub, Power Management IC, and Temperature Sensor, enable the bandwidth and capacity of DDR5 while keeping power within the same per-module envelope.
  2. Power Management Integrated Circuit: The addition of the Power Management Integrated Circuit (PMIC) on the DIMM is a significant change in DDR5. It improves power integrity, eliminates concerns about IR drop, and allows fine grain control of voltage levels. This change follows the trend in microelectronic systems to optimize power by delivering as high a voltage as possible to the endpoint and then regulating into lower voltages with higher currents there.
  3. Rambus’ Server PMICs Family: Rambus has announced a new family of Server PMICs which support a broad range of use cases. The key difference among the three family members is the targeted output current of each. The PMIC 5010 targets a total current of 12 amps, the PMIC 5000 targets a current output of about 20 amps, and the PMIC 5020 supports the highest current levels of up to 30 amps.
  4. Extreme Current Needed: The PMIC 5020 is targeted towards the highest capacity RDIMMs and is expected to be used in platforms launching at speeds of 7200 MT/s, likely in 2025. There may be earlier opportunities for the 5020 addressing special high-bandwidth RDIMMs launching later this year.
  5. AI Driving the Pace: The unprecedented rate of change in DDR5-based servers is being driven by advanced data center workloads, particularly generative AI. This is leading to an insatiable appetite for more bandwidth and capacity. As a result, new innovations are being built into each subsequent generation of DDR5 to hit the required speeds.
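
The current tiers in takeaway 3 map to output power by simple P = V x I arithmetic at the DDR5 VDD of 1.1 V. The sketch below is illustrative only; real PMIC budgets span multiple rails and are more involved:

```python
VDD = 1.1  # DDR5 supply voltage in volts

# Targeted output current per family member, from the takeaways above
pmic_family = {"PMIC 5010": 12, "PMIC 5000": 20, "PMIC 5020": 30}  # amps

for part, amps in pmic_family.items():
    print(f"{part}: up to {VDD * amps:.1f} W at {amps} A")
# PMIC 5020: up to 33.0 W at 30 A
```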

Key Quote

The addition of the PMIC on the DIMM is a significant change in DDR5, probably one of the bigger ones. It’s really following the trend of what’s going on in microelectronic systems that to optimize the power system, it’s best to deliver as high a voltage as possible to the endpoint and then do the regulation into the lower voltages with higher currents there. There’s both technical and economic reasons why this choice was made for DDR5. The technical reason is really increased power integrity which is needed to enable higher speeds.


New 7200 MT/s RCD Supports Ambitious Server Roadmap
https://www.rambus.com/blogs/new-7200-mts-rcd-supports-ambitious-server-roadmap/
Wed, 27 Dec 2023

We’re witnessing an unprecedented time for computing. Advanced data center workloads, with generative AI leading the pack, have set a blistering pace for hardware performance improvements. Platform vendors are responding with the most ambitious server roadmap ever seen. For example, the just-introduced 5th Gen Intel® Xeon® Processor arrived only a year after its predecessor. The 4th Gen Xeon used 4800 MT/s DDR5 memory; the 5th Gen pushed performance up with 5600 MT/s DDR5.

The Rambus Gen4 DDR5 RCD boosts the data rate to 7200 MT/s.

The RCD is the key control plane chip on a DDR5 RDIMM

To support that accelerated server roadmap, Rambus, as a leader in cutting-edge memory chip solutions, needs to keep advancing the performance of its Registering Clock Drivers (RCD). The RCD is the key control plane chip on an RDIMM, providing clocks and command/address (C/A) signals to the DRAMs. It’s like a conductor, keeping the symphony of memory operations in sync. Above and beyond that, the C/A signals from the RCD tell each DRAM the location and operation (read or write) for data.

Today marks another important milestone in the DDR5 journey as we announce that we have advanced the performance of our DDR5 RCD to 7200 MT/s. With a 50% increase in data rate and bandwidth over current production 4800 MT/s solutions, the Rambus 7200 MT/s DDR5 RCD enables a new level of main memory performance for data center servers. Delivering industry-leading latency and power, it offers optimized timing parameters for improved RDIMM margins.

The Rambus RCD is the flagship of our DDR5 memory interface chipset, built on over 30 years of high-performance memory experience and our company’s renowned signal integrity (SI) / power integrity (PI) expertise. The chipset also includes Serial Presence Detect (SPD) Hub and Temperature Sensors, two more key components for server systems. The SPD Hub and Temperature Sensors improve DDR5 DIMM system and thermal management in order to achieve higher performance levels within the desired power envelope.

The demands on data center servers will continue their rapid rise, and memory is a critical enabler of greater server performance. As a leader in memory interface chips, customers can count on Rambus to deliver state-of-the-art solutions ahead of the market need as with our new 7200 MT/s RCD announced here.

[Infographic]: DDR5 – Powering the Next Generation of Data Centers
https://www.rambus.com/blogs/infographic-ddr5-powering-the-next-generation-of-data-centers/
Thu, 16 Feb 2023

Advanced memory technology is needed to support higher DRAM capacity and bandwidth. See how DDR5 can help.


Rambus DDR5 RCD Takes Performance to 6400 MT/s
https://www.rambus.com/blogs/rambus-ddr5-rcd-takes-performance-to-6400mts/
Wed, 01 Feb 2023

We have said it before, and we will say it again: you can never have enough memory bandwidth. Nowhere is this statement truer than in the data center, where advanced workloads for high-performance computing (HPC) and artificial intelligence/machine learning (AI/ML) continue to demand unprecedented levels of bandwidth, and then some more.

DDR5 memory is set to be a game changer in this respect, and as DDR5 scales to offer new levels of performance, Rambus continues to set the pace as a leader in cutting-edge DDR5 memory chip solutions.

Today marks another important milestone in the DDR5 journey as we announce that we have advanced the performance of our DDR5 Registering Clock Driver (RCD) to 6400 MT/s. We were first in the industry to 5600 MT/s, and we have raised the bar once again, ready to support another major performance upgrade in DDR5 RDIMMs.

With a 33% increase in data rate and bandwidth over current production 4800 MT/s solutions, the Rambus 6400 MT/s DDR5 RCD enables a new level of main memory performance for data center servers. Delivering industry-leading latency and power, it offers optimized timing parameters for improved RDIMM margins.

With DDR5 memory, more intelligence is built into the RDIMMs enabling over double the data rate and four times the capacity of DDR4 RDIMMs, while at the same time increasing memory and power efficiency.

The Rambus DDR5 memory interface chipset is built on over 30 years of high-performance memory experience and the company’s renowned signal integrity (SI) / power integrity (PI) expertise. The chipset includes the RCD, as well as Serial Presence Detect (SPD) Hub and Temperature Sensors, two key components for server systems. The SPD Hub and Temperature Sensors improve DDR5 DIMM system and thermal management in order to achieve higher performance levels within the desired power envelope.

DDR5 is set to eventually scale to 8400 MT/s and the journey will be an exciting one as we see each new generation of servers launched into the market.

DDR5 Delivers More Bandwidth and Capacity with a Smarter DIMM
https://www.rambus.com/blogs/ddr5-delivers-more-bandwidth-and-capacity-with-a-smarter-dimm/
Mon, 18 Jul 2022

The first wave of DDR5-based servers sport RDIMMs running at 4800 megatransfers per second (MT/s). This is a 50% increase in data rate over top-end 3200 MT/s DDR4 RDIMMs in previous generation high-performance servers. DDR5 memory incorporates a number of innovations, including Decision Feedback Equalization (DFE) and a new DIMM architecture, which enable that speed grade jump and support future scaling.

DDR5 also supports higher-capacity DRAM devices. With DDR5 DIMMs, server and system designers will ultimately be able to use densities of up to 64 Gb in a single-die package (SDP), whereas DDR4 maxes out at 16 Gb per SDP. To enable reliable operation of these higher-capacity DRAMs, DDR5 adds features like on-die ECC, error transparency mode, post-package repair, and read and write CRC modes. Higher-capacity devices translate directly to higher-capacity RDIMMs: while DDR4 RDIMMs top out at 64 GB (using SDPs), DDR5 SDP-based RDIMMs will quadruple that to 256 GB in the future.
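The capacity arithmetic above can be sketched as follows, under one common module layout we assume for illustration: two ranks of x4 SDP devices, with 64 data bits per rank (16 data devices) and ECC devices excluded from usable capacity.

```python
# Usable RDIMM capacity from DRAM die density, for a 2-rank module of
# x4 single-die packages: 64 data bits / 4 bits per device = 16 data
# devices per rank (ECC devices add reliability, not usable capacity).

def rdimm_capacity_gb(die_density_gb: int, ranks: int = 2,
                      data_devices_per_rank: int = 16) -> int:
    """Usable capacity in GB for a module built from x4 SDP devices."""
    total_gbits = die_density_gb * data_devices_per_rank * ranks
    return total_gbits // 8  # 8 bits per byte

print(rdimm_capacity_gb(16))  # DDR4 max SDP density: 64 GB
print(rdimm_capacity_gb(64))  # DDR5 max SDP density: 256 GB
```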

In order to achieve higher bandwidth and capacity, while maintaining reliability, availability and serviceability (RAS) features, boot time performance and staying within the desired power envelope, DDR5 requires a “smarter DIMM.” To achieve that, greater intelligence is built into a DDR5 RDIMM through the addition of new and more capable support chips. Two of these are the SPD Hub and Temperature Sensor ICs.

DDR4 had a Serial Presence Detect (SPD) IC that provided module information via I2C (~1 MHz data rate) to the Baseboard Management Controller (BMC). The DDR5 SPD Hub scales that communication up to 10 MHz with the faster I3C protocol. It aggregates BMC communication from the module for all the other support chips and has a built-in temperature sensor. With the faster I3C communication, the DDR5 SPD Hub reduces initialization time and supports a higher rate of polling and real-time control.
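A rough sketch of why the faster bus cuts initialization time: reading out the module's SPD contents takes roughly an order of magnitude less time at I3C speeds. This is a simplification that treats each byte as about 9 clock cycles on the bus (8 data bits plus an ACK) and ignores addressing and turnaround overhead; the 1024-byte figure is our assumption for the DDR5 SPD contents.

```python
# Rough estimate of SPD read-out time at I2C vs I3C bus speeds.
# Assumes ~9 bus clocks per byte and no protocol overhead.

def spd_read_ms(spd_bytes: int, bus_hz: float, bits_per_byte: int = 9) -> float:
    """Approximate time in milliseconds to read spd_bytes at bus_hz."""
    return spd_bytes * bits_per_byte / bus_hz * 1000

print(f"I2C @ 1 MHz:  {spd_read_ms(1024, 1e6):.2f} ms")   # 9.22 ms
print(f"I3C @ 10 MHz: {spd_read_ms(1024, 10e6):.2f} ms")  # 0.92 ms
```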

New with DDR5, there are also two discrete Temperature Sensor ICs on the RDIMM. In concert with the SPD Hub's internal temperature sensor, they provide three points of thermal telemetry on the DIMM. This thermal information, communicated from the module by the SPD Hub to the BMC, can then be used to manage cooling fan speed. DRAM refresh rate can be more finely managed to trade off performance against retention, and if the RDIMM is running too hot, bandwidth can be throttled as needed to reduce the thermal load.
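The control loop described above might be sketched like this. The thresholds, fan-ramp policy, and function name are invented for illustration only and do not come from any JEDEC or Rambus specification.

```python
# Illustrative DIMM thermal policy using the three on-module sensors
# (two discrete Temperature Sensor ICs plus the SPD Hub's internal
# sensor). Thresholds and the fan ramp are made up for the example.

def dimm_thermal_policy(temps_c):
    """Map the hottest of the three sensor readings (deg C) to a fan
    duty cycle percentage and a bandwidth-throttle flag."""
    hottest = max(temps_c)            # act on the worst-case reading
    throttle = hottest >= 95          # shed thermal load as a last resort
    fan_duty = min(100, max(30, hottest - 40))  # crude linear ramp
    return fan_duty, throttle

print(dimm_thermal_policy([62, 58, 65]))  # (30, False): idle fan floor
print(dimm_thermal_policy([90, 88, 96]))  # (56, True): throttle engaged
```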

DDR5 RDIMMs Showing Rambus Memory Interface Chips
Rambus Expands Portfolio of DDR5 Memory Interface Chips for Data Centers and PCs

Rambus, as a renowned leader in memory interface chips, today announced the availability of SPD Hub and Temperature Sensor ICs for server DDR5 RDIMMs, LRDIMMs and NVDIMMs. The SPD Hub also supports DDR5 UDIMM and SODIMM memory modules for PCs. You can read the press release here.

How Rambus is Making Data Faster and Safer in 2022 and Beyond
https://www.rambus.com/blogs/rambus-2021-wrapped/
Thu, 27 Jan 2022

Throughout 2021 and early 2022, Rambus has continued to make data faster and safer with the launch of key products, industry initiatives, and strategic partnerships. To address the insatiable demand for more bandwidth in the data center, we announced our 8.4 Gbps HBM3-Ready Memory Subsystem, confirmed the sampling of our DDR5 5600 MT/s 2nd-generation RCD chip, demonstrated our PCI Express® (PCIe) 5.0 digital controller IP on leading FPGA platforms, and unveiled our CXL Memory Interconnect Initiative. Looking ahead to 2022 and beyond, these products, initiatives, and partnerships will help power the next generation of bandwidth-hungry AI/ML applications and support the new accelerators and servers arriving in data centers over the coming months.

We also continued to meet increased demand for a hardware-based security paradigm across multiple verticals, including the IoT and automotive markets. To help protect IoT devices, Kyocera selected the FIPS 140-2 CMVP-certified Rambus RT-130 Root of Trust and AES-IP-38 AES Accelerator, while NextChip chose the Rambus RT-640 Root of Trust and MACsec-IP-160 Protocol Engine to secure its Apache6 automotive processor. As high-profile security exploits, breaches, and counterfeit silicon multiply in 2022, we will see an increasing emphasis placed on a hardware-based security paradigm in both the IoT and automotive spaces. To be sure, we expect a proliferation of dedicated silicon that is specifically designed to protect sensitive cryptographic functions and data. This model is the most effective way to secure data when at rest (processed or stored in a device) and when in motion (communicated between connected devices). 

Let’s take a more in-depth look at how Rambus continues to make data faster and safer in 2022 and beyond.  

Faster Speeds for Higher Bandwidth

HBM3

In the summer of 2021, we announced our HBM3-ready memory interface subsystem comprising a fully integrated PHY and digital controller. Supporting breakthrough data rates of up to 8.4 Gbps, the solution delivers over a terabyte per second of bandwidth—more than double that of high-end HBM2E memory subsystems. According to Soo Kyoum Kim, associate VP, Memory Semiconductors at IDC, the memory bandwidth requirements of AI/ML training are “insatiable,” with leading-edge training models now surpassing billions of parameters. As Kim emphasizes, the Rambus HBM3-ready memory subsystem “raises the bar” for performance, enabling state-of-the-art AI/ML and HPC applications.

According to Joel Hruska of ExtremeTech, early HBM3 hardware should be capable of ~1.4x more bandwidth than current HBM2E. However, as the standard evolves, that figure will rise to ~1.075TB/s of memory bandwidth per stack, with maximum I/O transfer rates of up to 8.4Gbps. 

“These figures are per stack and many GPUs use HBM with 2-4 stacks, so total bandwidth provided by a four-stack HBM3 solution at 665GB/s is ~2.7TB/s,” he adds. It should be noted that both AMD (Genoa) and Intel (Sapphire Rapids) are expected to begin shipping their respective HBM-equipped server processors in 2022.
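The per-stack figures quoted above follow directly from the 1024-bit HBM interface width; a few lines make the arithmetic explicit (decimal TB, helper name ours):

```python
# HBM bandwidth per stack: 1024-bit interface x per-pin data rate.

def hbm_stack_bandwidth_tbs(pin_rate_gbps: float, bus_bits: int = 1024) -> float:
    """Per-stack bandwidth in TB/s given the per-pin rate in Gbps."""
    return pin_rate_gbps * bus_bits / 8 / 1000  # bits -> bytes, GB -> TB

print(f"8.4 Gbps pins: {hbm_stack_bandwidth_tbs(8.4):.3f} TB/s per stack")  # ~1.075
print(f"4 stacks at 665 GB/s each: {4 * 0.665:.2f} TB/s")                   # ~2.66
```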

DDR5

In late 2021, we confirmed the sampling of our 5600 MT/s 2nd-generation RCD chip with major DDR5 memory module (RDIMM) suppliers. The new level of performance represents a 17% increase in data rate over the first-generation 4800 MT/s Rambus DDR5 RCD. With DDR5 memory, more intelligence is built into the DIMMs, enabling up to double the data rate and four times the capacity of DDR4 DIMMs, while at the same time reducing power and increasing memory efficiency. 

According to Shane Rau, research vice president, Computing Semiconductors at IDC, advanced workloads are driving the increased demand for greater memory bandwidth. 

“It [is therefore] essential that DDR5 ecosystem players like Rambus continue to raise the bar on performance to meet the rapidly rising needs of data center applications,” says Rau. 

As we noted in our introduction, Rambus memory interface chips will enable next-generation DDR5-based servers to achieve new levels of performance. These new servers are slated to hit data centers in 2022 and beyond, with RDIMMs running at 4800 MT/s. This number represents a 50% increase in data rate over top-end 3200 MT/s DDR4 RDIMMs in current high-performance servers.

CXL™ Memory Interconnect Initiative

In the closing months of 2021, we announced our CXL Memory Interconnect Initiative to develop semiconductor solutions for advanced data center architectures that maximize performance, improve efficiency, and reduce system cost. Compute Express Link™ (CXL) is an open industry-standard interconnect delivering high-bandwidth, low-latency connectivity between dedicated compute, memory, I/O, and storage elements within the data center, allowing the optimal mix of each to be provisioned for a given workload.

CXL memory expansion and pooling chips are key components for both traditional and disaggregated architectures. To support the continuing growth and specialization in server workloads, data centers are moving to disaggregated architectures composed from shared and scalable pools of computing and memory resources. CXL is a critical enabler of these next-generation disaggregated server architectures.
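Conceptually, memory pooling means an orchestrator hands out slices of a shared capacity pool to hosts on demand and reclaims them when a workload finishes. The toy model below illustrates that bookkeeping only; the class, names, and policy are invented for illustration and have nothing to do with the actual CXL protocol.

```python
# Toy model of the accounting behind a shared, poolable memory resource.

class MemoryPool:
    """Track grants of a shared memory pool to multiple hosts."""

    def __init__(self, total_gb: int):
        self.free_gb = total_gb
        self.grants = {}  # host -> GB currently assigned

    def assign(self, host: str, gb: int) -> None:
        """Grant gb of pool capacity to host, failing if exhausted."""
        if gb > self.free_gb:
            raise MemoryError("pool exhausted")
        self.free_gb -= gb
        self.grants[host] = self.grants.get(host, 0) + gb

    def release(self, host: str) -> None:
        """Return all of host's capacity to the pool."""
        self.free_gb += self.grants.pop(host, 0)

pool = MemoryPool(1024)
pool.assign("host-a", 256)
pool.assign("host-b", 512)
print(pool.free_gb)  # 256
pool.release("host-a")
print(pool.free_gb)  # 512
```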

According to Matt Jones, general manager of IP cores at Rambus, CXL interconnects are quite versatile due to their high-bandwidth, low-latency characteristics—and can therefore be used to interconnect various hosts and resources in the system. 

“We see the device evolving here into one that supports multiple hosts on the upstream side and being able to share efficiently a pool of memory on the downstream side so that you can assign multiple hosts to efficiently share that memory,” Jones tells The Next Platform. “The key building blocks are here from an IP standpoint that tie back to the acquisitions we made on the PHY and the controller on both sides.”

Securing Silicon to Protect Data 

Kyocera Selects Rambus Root of Trust for IoT Security

In early 2022, Rambus announced that Kyocera Evolution Series MFPs will offer data security meeting Federal Information Processing Standards (FIPS) 140-2 Cryptographic Module Validation Program (CMVP) requirements. Specifically, the FIPS-certified Kyocera Evolution Series MFPs utilize the Rambus RT-130 Root of Trust and AES-IP-38 AES Accelerator as part of a system security architecture that provides robust, up-to-date protection for customers.

According to Neeraj Paliwal, general manager of security IP at Rambus, secure by design is a fundamental property of solutions from industry leaders like Kyocera. By building on Rambus FIPS CMVP-certified IP solutions, chip and system providers can better navigate the certification process and accelerate the development of secure solutions.  

This is increasingly important in 2022 and beyond. Because data centers have transformed into virtual fortresses (both in the physical and digital domains), adversaries have turned their focus to more vulnerable edge and end points. 

NextChip Selects Rambus Security IP to Secure Apache6 Automotive Processor

In January 2022, NextChip selected the Rambus RT-640 Root of Trust and MACsec-IP-160 Protocol Engine to provide hardware-level security for its next-generation Apache6 automotive processor. The Apache6 ADAS SoC combines CPU, GPU, ISP, and NPU processors to enable advanced automotive vision and domain/zone controller applications such as AVP. 

The Rambus RT-640 Root of Trust provides security services and protection of data processed by the Apache6 SoC. The RT-640 is a powerful security co-processor featuring automotive-grade embedded security software and high-performance cryptographic accelerators for AES, HMAC, SHA-2, and more. Dedicated safety integrity mechanisms ensure correct operation and extensive error handling, while advanced anti-tamper features protect the chip from side-channel and fault injection (FI) attacks. Meanwhile, the Rambus MACsec-IP-160 encrypts and protects data at speeds up to 100 Gbps over in-car Ethernet networks.

According to NextChip CTO Hweihn Chung, the company is raising the bar for reliable, compact, and affordable ADAS solutions with the Apache6.

“With Rambus security IP solutions, Apache6 offers state-of-the-art protection of mission-critical data while meeting full ASIL-B compliance,” he adds. 

Conclusion

Rambus continues to make data faster and safer with the launch of key products, industry initiatives, and strategic partnerships. In 2022 and beyond, we are addressing the insatiable demand for more bandwidth in the data center to support new AI/ML applications with our 8.4 Gbps HBM3-Ready Memory Subsystem, DDR5 5600 MT/s 2nd-generation RCD chip, and CXL Memory Interconnect Initiative.

On the security side, we continue to meet increased demand for a hardware-based security paradigm across multiple verticals, including the IoT and automotive markets. Recent examples include Kyocera selecting the FIPS 140-2 CMVP-certified Rambus RT-130 Root of Trust and AES-IP-38 AES Accelerator for its IoT silicon, and NextChip choosing the Rambus RT-640 Root of Trust and MACsec-IP-160 Protocol Engine to secure its Apache6 automotive processors. As high-profile security exploits, breaches, and counterfeit silicon multiply in 2022, we will see an increasing emphasis placed on a hardware-based security paradigm in both the IoT and automotive spaces. To be sure, we expect a proliferation of dedicated silicon that is specifically designed to protect sensitive cryptographic functions and data.
