Data Center Archives - Rambus
At Rambus, we create cutting-edge semiconductor and IP products, providing industry-leading chips and silicon IP to make data faster and safer.

Rambus CXL IP Advances Data Center Capabilities in CXL Over Optics Demo
https://www.rambus.com/blogs/rambus-demonstrates-advancing-data-center-capabilities-with-cxl-over-optics/
Mon, 05 Aug 2024

At Rambus, we are committed to pioneering advancements that meet the evolving demands of modern data centers. Today, we are showcasing advanced technology IP for high-speed data center interconnects: CXL 2.0 over optics.

What is CXL?

Compute Express Link (CXL) is an open interconnect standard that enhances communication between processors, memory expansion, and accelerators. Built on the robust PCI Express (PCIe) framework, CXL provides memory coherency between CPU memory and attached devices. This innovation enables efficient resource sharing, reduces software complexity, and lowers system costs—making it an essential component in the future of data center architecture.

Demonstrating the Future: CXL Over Optics

In this demonstration, Olivier Alexandre, Senior Manager of Validation Engineering at Rambus, shows Rambus CXL IP instantiated in an endpoint device connected to a Viavi Xgig 6P4 Exerciser using Samtec Firefly optic cable technology, effectively creating a remote “CXL Memory Expansion” block.

Here are more details on the demo:

Device Setup: Our Device Under Test (DUT), incorporating Rambus CXL 2.0 Controller IP, is configured as a CXL 2.0 Type 2 device, operating at 16 GT/s across four lanes. The Viavi Xgig 6P4 emulates a Root Complex device, and the two are linked through a Samtec Firefly PCUO G4 cable supporting speeds up to 16 GT/s (a quick link-bandwidth sketch follows these demo details).

Performance Insights: The DUT successfully maintains a stable link at Gen4 speed (16 GT/s) with a x4 link width. We also conducted tests at lower speeds, confirming the expected performance limits of earlier generations.

Device Discovery: During device discovery, the Rambus CXL IP-enabled device was correctly identified, highlighting device capability and integration.

Compliance Success: Utilizing the Viavi exerciser, we conducted a CXL 2.0 compliance test over a 100-meter fiber optic connection. The test suite, taking approximately 20 minutes, confirmed that our DUT passed all compliance tests.
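For context on the link speed quoted above, here is a rough sketch of the raw bandwidth of the demo link. The 16 GT/s rate and x4 width come from the demo setup; the 128b/130b line coding is the standard encoding at PCIe Gen3 through Gen5 rates (which CXL 1.x/2.0 reuse), and protocol overheads are ignored.

  # Rough per-direction bandwidth of the demo link (Gen4 electricals, x4)
  # Assumes standard 128b/130b line coding; FLIT/TLP overheads are ignored.
  rate_per_lane = 16e9          # 16 GT/s per lane (from the demo setup)
  lanes = 4                     # x4 link width
  encoding = 128 / 130          # 128b/130b coding used at PCIe 3.0-5.0 rates
  gbytes_per_lane = rate_per_lane * encoding / 8 / 1e9
  print(f"Per lane : {gbytes_per_lane:.2f} GB/s")
  print(f"x{lanes} link  : {gbytes_per_lane * lanes:.2f} GB/s per direction")
  # ~1.97 GB/s per lane and ~7.9 GB/s per direction for this x4 demo link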

The Promise of CXL/PCIe Over Optics

This demonstration illustrates the potential of CXL/PCIe over optics as a key solution to meet the bandwidth demands of heterogeneous, distributed data center architectures. Optical interconnects offer significant advantages, including extended reach, reduced latency, and efficient resource sharing across multiple servers.

Learn more about Rambus CXL IP solutions here.

About Samtec and Viavi

Samtec
Known for its high-performance interconnect solutions, Samtec provides leading-edge technology such as the Firefly optic cable, enabling high-speed data transmission with impressive range and low latency.

Viavi
A leader in network testing and measurement, Viavi Solutions offers products like the Xgig 6P4 Exerciser, which is crucial for ensuring compliance and performance in complex network environments.

Rambus Unveils PCIe 7.0 IP Portfolio for High-Performance Data Center and AI SoCs
https://www.rambus.com/blogs/rambus-unveils-pcie-7-0-ip-portfolio-for-high-performance-data-center-and-ai-socs/
Wed, 12 Jun 2024

The relentless innovation in Artificial Intelligence (AI) and High-Performance Computing (HPC) demands a cutting-edge hardware infrastructure capable of handling unprecedented data loads. To overcome these challenges and usher in a new era of performance, Rambus is proud to announce the launch of our PCI Express® (PCIe®) 7.0 IP portfolio, encompassing a comprehensive suite of IP solutions including:

  • PCIe 7.0 Controller designed to deliver the high bandwidth, low latency, and robust performance required for next-generation AI and HPC applications
  • PCIe 7.0 Retimer providing a highly-optimized, low-latency data path for signal regeneration
  • PCIe 7.0 Multi-port Switch that is physically aware to support numerous architectures
  • XpressAGENT™ to enable customers to rapidly bring up first silicon

“The burgeoning landscape of data center chip manufacturers, driven by the emergence of novel data center architectures, necessitates the availability of high-performance interface IP solutions to foster a robust and thriving ecosystem,” said Neeraj Paliwal, SVP & GM of Silicon IP at Rambus. “The Rambus PCIe 7.0 IP portfolio addresses this challenge by delivering unparalleled bandwidth, low latency, and security features. These components work together to provide a seamless, high-performance solution that meets the rigorous demands of AI and HPC applications.”

Rambus PCIe 7.0 Controller IP key features include:

  • Supports the PCIe 7.0 specification, including a 128 GT/s data rate (a raw-throughput sketch follows this list)
  • Implementation of low-latency Forward Error Correction (FEC) for link robustness
  • Supports fixed-size FLITs that enable high bandwidth efficiency
  • Backward compatible with PCIe 6.0, 5.0, 4.0, and earlier generations
  • State-of-the-art security with an Integrity and Data Encryption (IDE) engine
  • Supports AMBA AXI interconnect
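As a back-of-envelope illustration of what the 128 GT/s data rate means for throughput, the sketch below computes the raw, per-direction link bandwidth across common lane widths. These are pre-overhead numbers; FLIT framing, FEC, and CRC reduce effective throughput somewhat.

  # Raw PCIe 7.0 bandwidth per direction at 128 GT/s (PAM4, FLIT mode)
  # Raw figures only; FLIT, FEC and CRC overheads are not modeled here.
  rate_gt_s = 128                       # GT/s per lane (PCIe 7.0)
  for lanes in (1, 4, 8, 16):
      gb_per_s = rate_gt_s * lanes / 8  # 8 bits per byte
      print(f"x{lanes:<2}: {gb_per_s:6.1f} GB/s per direction")
  # x16 works out to 256 GB/s per direction, i.e. ~512 GB/s bidirectional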
PCIe 7.0 Controller IP Block Diagram

Rambus PCIe 7.0 Retimer IP key features include:

  • Supports the PCIe 7.0 specification with x2 to x16 lane configurations
  • Pre-integrated XpressAGENT debug analysis IP
  • Highly-configurable equalization algorithms with adaptive behaviors
  • Power modes and intelligent clock gating to best manage controller IP
PCIe 7.0 Retimer IP Block Diagram

Rambus PCIe 7.0 Switch IP key features include:

  • Highly scalable with up to 32 ports, configurable as external or internal endpoints
  • Physically aware to account for port placements across large die
  • Superior performance through non-blocking architecture
  • Allows seamless migration from FPGA prototyping design to ASIC/SoC production design with the same RTL
PCIe 7.0 Switch Block Diagram

Rambus PCIe XpressAGENT key features include:

  • Non-intrusive, intelligent, in-IP debug/logic analyzer for PCIe Controller, Retimer and Switch IP enabling rapid first-silicon bring-up
  • Integrates with any PIPE-compliant SerDes
  • Provides unified access to PHY, MAC and Link Layers locally or remotely via a CPU-agnostic API
  • Provides pre-emptive monitoring and diagnosis via remote access for in-field products

In addition to the PCIe IP portfolio, Rambus also offers industry-leading interface IP for HBM, CXL, GDDR, LPDDR, and MIPI. For more information, visit www.rambus.com/interface-ip.

New CXL 3.1 Controller IP for Next-Generation Data Centers
https://www.rambus.com/blogs/new-cxl-3-1-controller-ip-for-next-generation-data-centers/
Tue, 23 Jan 2024

The AI boom is giving rise to profound changes in the data center; compute-intensive workloads are driving an unprecedented demand for low-latency, high-bandwidth connectivity between CPUs, accelerators and storage. The Compute Express Link® (CXL®) interconnect offers new ways for data centers to enhance performance and efficiency.

As data centers grapple with increasingly complex AI workloads, the need for efficient communication between various components becomes paramount. CXL addresses this need by providing low-latency, high-bandwidth connections that can improve overall memory and system performance.

Three Big Data Center Memory Challenges

CXL 3.1 takes data rates up to 64 GT/s and offers multi-tiered (fabric-attached) switching to allow for highly scalable memory pooling and sharing. These features will be key in the next generation of data centers to mitigate high memory costs and stranded memory resources while delivering increased memory bandwidth and capacity when needed.

“The performance demands of Generative AI and other advanced workloads require new architectural solutions enabled by CXL,” said Neeraj Paliwal, general manager of Silicon IP at Rambus. “The Rambus CXL 3.1 digital controller IP extends our leadership in this key technology delivering the throughput, scalability and security of the latest evolution of the CXL standard for our customers’ cutting-edge chip designs.”

The Rambus CXL 3.1 Controller IP is a flexible design suitable for both ASIC and FPGA implementations. It uses the Rambus PCIe® 6.1 Controller architecture for the CXL.io protocol, and it adds the CXL.cache and CXL.mem protocols specific to CXL. The built-in, zero-latency integrity and data encryption (IDE) module delivers state-of-the-art security against physical attacks on the CXL and PCIe links. The controller can be delivered standalone or integrated with the customer’s choice of CXL 3.1/PCIe 6.1 PHY.

CXL 3.1 Controller Block Diagram

CXL is a key interconnect for data centers and addresses many of the challenges posed by data-intensive workloads. Join Lou Ternullo at our upcoming webinar “Unlocking the Potential of CXL 3.1 and PCIe 6.1 for Next-Generation Data Centers” to learn how CXL and PCIe interconnects can help designers optimize their data center memory infrastructure solutions.

New 7200 MT/s RCD Supports Ambitious Server Roadmap
https://www.rambus.com/blogs/new-7200-mts-rcd-supports-ambitious-server-roadmap/
Wed, 27 Dec 2023

We’re witnessing an unprecedented time for computing. Advanced data center workloads, with Generative AI leading the pack, have set a blistering pace for hardware performance improvements. The platform vendors are responding with the most ambitious server roadmap ever seen. For example, the just-introduced 5th Gen Intel® Xeon® Processor came just a year after its predecessor. The 4th Gen Xeon used 4800 MT/s DDR5 memory, while the 5th Gen pushed performance up with 5600 MT/s DDR5.

The Rambus Gen4 DDR5 RCD boosts the data rate to 7200 MT/s.

The RCD is the key control plane chip on a DDR5 RDIMM

To support that accelerated server roadmap, Rambus, as a leader in cutting-edge memory chip solutions, needs to keep advancing the performance of its Registering Clock Drivers (RCD). The RCD is the key control plane chip on an RDIMM, providing clocks and command/address (C/A) signals to the DRAMs. It’s like a conductor, keeping the symphony of memory operations in sync. Above and beyond that, the C/A signals from the RCD tell each DRAM the location and operation (read or write) for data.

Today marks another important milestone in the DDR5 journey as we announce that we have advanced the performance of our DDR5 RCD to 7200 MT/s. With a 50% increase in data rate and bandwidth over current production 4800 MT/s solutions, the Rambus 7200 MT/s DDR5 RCD enables a new level of main memory performance for data center servers. Delivering industry-leading latency and power, it offers optimized timing parameters for improved RDIMM margins.
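To put the 7200 MT/s figure in perspective, the back-of-envelope sketch below converts DDR5 transfer rates into peak bandwidth per RDIMM, assuming the standard 64-bit data width (two 32-bit subchannels, ECC bits excluded).

  # Peak DDR5 RDIMM bandwidth = transfer rate x 8 bytes (64-bit data width)
  def peak_gb_per_s(mt_per_s):
      return mt_per_s * 8 / 1000        # MT/s x 8 B/transfer -> GB/s

  for rate in (4800, 5600, 7200):
      print(f"DDR5-{rate}: {peak_gb_per_s(rate):.1f} GB/s per DIMM")
  # 4800 -> 38.4 GB/s, 7200 -> 57.6 GB/s: the 50% uplift described above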

The Rambus RCD is the flagship of our DDR5 memory interface chipset, built on over 30 years of high-performance memory experience and our company’s renowned signal integrity (SI) / power integrity (PI) expertise. The chipset also includes Serial Presence Detect (SPD) Hub and Temperature Sensors, two more key components for server systems. The SPD Hub and Temperature Sensors improve DDR5 DIMM system and thermal management in order to achieve higher performance levels within the desired power envelope.

The demands on data center servers will continue their rapid rise, and memory is a critical enabler of greater server performance. As a leader in memory interface chips, Rambus is a partner customers can count on to deliver state-of-the-art solutions ahead of market need, as with the new 7200 MT/s RCD announced here.

CXL 3.1: What’s Next for CXL-based Memory in the Data Center
https://www.rambus.com/blogs/cxl-3-1-whats-next-for-cxl-based-memory-in-the-data-center/
Tue, 14 Nov 2023

Today (Nov. 14th, 2023) the CXL™ Consortium announced the continued evolution of the Compute Express Link™ standard with the release of the 3.1 specification. CXL 3.1, backward compatible with all previous generations, improves fabric manageability, further optimizes resource utilization, enables trusted compute environments, extends memory sharing and pooling to avoid stranded memory, and facilitates memory sharing between accelerators. When deployed, these improvements will boost the performance of AI and other demanding compute workloads.

Supercomputing 2023 (SC23), going on this week in Denver, provided the perfect backdrop for announcing this latest advancement in the CXL standard. At SC23, the Consortium is hosting demos from 16 ecosystem partners at the CXL pavilion (Booth #1301) including Rambus. There, we’re demonstrating the newly-announced Rambus CXL Platform Development Kit (PDK) performing memory tiering operations.

Three Big Data Center Memory Challenges

CXL memory tiering tackles one of the biggest hurdles for data center computing: namely, the three-order-of-magnitude latency gap that exists between direct-attached DRAM and Solid-State Drive (SSD) storage. When a processor runs out of capacity in DRAM main memory, it must go to SSD, which leaves the processor waiting. That waiting, or latency, has a dramatic negative impact on computing performance.
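The size of that gap is easiest to see with rough, illustrative latency figures. The exact numbers vary by platform and device; those in the sketch below are assumptions chosen only to show the orders of magnitude involved.

  # Illustrative access latencies (approximate, platform-dependent assumptions)
  latencies_ns = {
      "Direct-attached DDR5 DRAM": 100,       # ~100 ns load-to-use
      "CXL-attached memory":       200,       # roughly a NUMA hop away
      "NVMe SSD read":             100_000,   # ~100 microseconds
  }
  dram_ns = latencies_ns["Direct-attached DDR5 DRAM"]
  for tier, ns in latencies_ns.items():
      print(f"{tier:<27} {ns:>9,} ns ({ns / dram_ns:>6.0f}x DRAM)")
  # DRAM to SSD spans roughly three orders of magnitude; CXL-attached
  # memory sits in between, providing the intermediate tier described above.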

Two other big problems arise from 1) the rapid scaling of core counts in multi-core processors and 2) the move to accelerated computing with purpose-built GPUs, DPUs, etc. Core counts are rising far faster than main memory channels, with the upshot being that cores past a certain number are starved for memory bandwidth, sub-optimizing the benefit of those additional cores. With accelerated computing, a growing number of processors and accelerators have direct-attached memory to provide higher performance, but more independent pockets of memory lead to a higher probability of memory resources being underutilized or stranded.
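A quick sketch makes the core-scaling problem concrete. Assuming, purely for illustration, a socket with eight channels of DDR5-4800 (about 38.4 GB/s each), the memory bandwidth available per core falls steadily as core counts climb while channel counts stay fixed:

  # Memory bandwidth per core for a hypothetical 8-channel DDR5-4800 socket
  channels = 8                          # assumed memory channels per socket
  gb_per_s_per_channel = 38.4           # DDR5-4800: 4800 MT/s x 8 B
  total = channels * gb_per_s_per_channel
  for cores in (32, 64, 96, 128):
      print(f"{cores:>3} cores: {total / cores:5.2f} GB/s per core")
  # The same ~307 GB/s shared pool yields 9.6 GB/s per core at 32 cores
  # but only 2.4 GB/s per core at 128 cores.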

Bridging the Latency Gap with CXL

CXL Memory Tiering Can Bridge the Latency Gap

CXL promises to address all three of these challenges. CXL enables new memory tiers which bridge the latency gap between direct-attached memory and SSD, and provide greater memory bandwidth to unlock the power of multi-core processors. In addition, CXL allows memory resources to be shared among processors and accelerators addressing the stranded memory challenge.

Rambus CXL PDK Add-in Card

Delivering on the promise of CXL requires a great deal of co-design of solutions across the ecosystem spanning chips, IP, devices, systems and software. That’s where tools like the Rambus CXL PDK can make a major contribution. The PDK enables module and system makers to prototype and test CXL-based memory expansion and pooling solutions for AI infrastructure and other advanced systems. Interoperable with CXL 1.1 and CXL 2.0 capable processors, and memory from all the major memory suppliers, it leverages today’s available hardware to accelerate the development of the full stack of CXL-based solutions slated for deployment in a few years’ time.

For more information about the Rambus CXL PDK check out the Rambus CXL Memory Initiative here.

 

Even More CXL Webinar Q&A!
https://www.rambus.com/blogs/even-more-cxl-webinar-q-and-a/
Thu, 08 Dec 2022

We had such a great response to last week’s More CXL Webinar Q&A blog that we decided to reprise the questions answered live in our webinar How CXL Technology will Revolutionize the Data Center (available on-demand). Hope this provides more insights on the capabilities of Compute Express Link™ (CXL™) technology.

Click on a link below to jump to a specific question:

  1. There have been multiple interconnect standards proposed in the past: CXL, Gen-Z, OpenCAPI and CCIX. Is CXL going to be the one that finally gets adopted?
  2. What is memory coherence and why is it important?
  3. Will CXL be a positive or negative driver of overall memory capacity per server going forward given it provides a more efficient means of memory use?
  4. What are the long-term implications for the RDIMM market given CXL can now be used to attach DRAM to the CPU?
  5. How is compliance handled in CXL? How do you know if a device is “CXL-compliant”?
  6. For the first wave of CXL products that introduce memory expansion, what’s the biggest demand from customers…more bandwidth or more capacity?

1. There have been multiple interconnect standards proposed in the past: CXL, Gen-Z, OpenCAPI and CCIX. Is CXL going to be the one that finally gets adopted?

There is wide industry support for CXL, and the market-leading platform makers Intel® and AMD®, and all the leading CSPs, are supporting it. CXL has the level of industry support you must achieve if a new technology with as broad a reach as CXL is to be successful. The Gen-Z and OpenCAPI assets are now combined with CXL. This further strengthens CXL by allowing the best parts of all these standards to be combined over time under the CXL umbrella. Also, CXL adopts the PCI Express® (PCIe®) physical layer, so it benefits from the ubiquitous deployment of PCIe in the data center.

2. What is memory coherence and why is it important?

Coherence means that when two processors share a piece of data, they are guaranteed that if one processor updates the data, the other processor will have access to those updates. Hardware guarantees this, and it makes the task of programming much, much easier – helping programmers develop more complex applications that require cores to share data.

Now in the context of CXL – CXL provides for the same memory coherence mechanisms that are used today with native DRAM. Furthermore, there are some additional hardware coherency support capabilities in later versions of the specification.

3. Will CXL be a positive or negative driver of overall memory capacity per server going forward given it provides a more efficient means of memory use?

In fact, we think CXL will end up increasing memory demand. Fundamentally, the reason for CXL to come into existence is the need for more memory. But as you point out, it also allows for more memory efficiency.

New technology to improve efficiency is a natural evolution in the data center. We can look back at a couple of similar situations over the past 20 years and see how they played out.

Back when multi-core CPUs were introduced, there was a concern that fewer CPUs would be sold. But in reality, multi-core CPUs opened up new capabilities that software developers took advantage of to build new applications. In the end, there was more demand for multi-core CPUs, and the market grew.

Virtualization of the server was thought likely to drive a reduction in capex on server equipment, because servers were becoming more efficient. Instead, the market grew as business models like Infrastructure as a Service and Platform as a Service thrived.

What we’ve seen over the years is that if you give software developers access to more tools, they create new workloads. Business leaders develop new product offerings. All of this drives a virtuous cycle of greater hardware market growth.

The industry as a whole is just scratching the surface for AI/ML use cases. CXL will enable new memory capacity, bandwidth, and tiering which will drive continued significant investment in memory infrastructure to enable these AI/ML workloads.

So back to efficiency – making memory more efficient is incredibly important because companies and cloud service providers are investing so much in memory to satisfy their insatiable need for more bandwidth and capacity. Capex dollars will continue to be spent on new memory, and CXL will help companies get much greater utility out of that spend.

4. What are the long-term implications for the RDIMM market given CXL can now be used to attach DRAM to the CPU?

Well, the RDIMM market is here to stay. Although CXL provides new options for memory tiering, data center operators will continue to want to have a significant amount of memory as close to the processing cores as possible with the lowest possible latency. RDIMMs address this need very well and will continue to address this tier of the memory hierarchy.

In addition, as we showed during the presentation, there are many use cases for CXL-attached memory modules which use RDIMMs. Cloud Service Providers have long relationships buying RDIMMs, they know how to use them, and they get an easily serviceable module in the data center. Therefore, we believe the market for RDIMMs will increase as a result of CXL.

5. How is compliance handled in CXL? How do you know if a device is “CXL-compliant”?

The CXL Consortium is developing a compliance program very similar to that of the PCI-SIG. We should expect the first CXL 1.1-compliant devices to appear on an integrators list next year (2023).

6. For the first wave of CXL products that introduce memory expansion, what’s the biggest demand from customers…more bandwidth or more capacity?

Great question and one that I get a lot. The simple answer is that it is really workload dependent. But the good news is that CXL can deliver both bandwidth and capacity.

Cores are Underserviced by Available Memory Bandwidth

If we go back to that graph (see above) I showed earlier in the webinar – where we hit a wall with the number of cores that we can serve with direct memory channels – that is, to first order, a bandwidth problem. But as soon as you unlock cores to do more work, generally speaking, those cores are going to require access to more memory capacity – whether it be more hot memory, warm memory, or cold memory. This new capacity and these new memory tiers are also enabled by the same CXL devices that can deliver the additional bandwidth.

There are workloads that are biased either to more bandwidth or to more capacity. For example, web search and content provider use cases would trade additional bandwidth for additional capacity, while in many AI/ML inference models or in-memory databases, capacity is extremely important. So, it really is a mix of needs dependent on the workload.

The beauty of CXL is that it provides for a wide variety of memory tiers that can be adopted in the right combination to best serve the needs of a given workload while delivering the lowest overall Total Cost of Ownership (TCO). When overlaid with software, such as that which will come out of initiatives like the recently announced OCP Composable Memory System workgroup, the possibilities are very exciting.

If you have any questions regarding CXL technology or Rambus products, feel free to ask them here.

Boosting Data Center Performance to the Next Level with PCIe 6.0 & CXL 3.0
https://www.rambus.com/blogs/boosting-data-center-performance-to-the-next-level-with-pcie-6-0-cxl-3-0/
Mon, 24 Oct 2022

2022 has seen major updates to two standards critical to the future evolution of the data center: PCI Express® (PCIe®) and Compute Express Link™ (CXL™). The two are interwoven, and in this blog, we’ll look at their relationship and the impact of the latest developments.

Like many standards in the computing world, PCIe has proliferated far beyond its original remit. Over the past two decades, it has not only become the de facto standard for computing connectivity, it has also expanded into new applications, such as IoT, automotive, government, and many more. With its most recent update to PCIe 6.0, it is poised to take data center performance to the next level.

PCIe 6.0 boosts signaling rates to 64 gigatransfers per second (GT/s), twice that of PCIe 5.0. Initial designs incorporating PCIe 6.0 will be where bandwidth demands are most intense right now: in the heart of the data center. For bandwidth-hungry, data-intensive workloads, the extra bandwidth offered by PCIe 6.0 will certainly be a game changer!

CXL, first introduced in 2019, adopted the ubiquitous PCIe standard for its physical layer protocol (CXL.io). At that time, PCIe 5.0 was the latest standard, and CXL 1.0, 1.1 and the subsequent 2.0 generation all used PCIe 5.0’s 32 GT/s signaling.

In August 2022, CXL 3.0 was released, adopting the PCIe 6.0 physical interface. This new version of the CXL specification introduced new features that promise to increase data center performance and scalability, while reducing the total cost of ownership (TCO). CXL 3.0, like PCIe 6.0, uses PAM4 to boost signaling rates to 64 GT/s with no additional latency.

Beyond this, it offers multi-tiered switching and switch-based fabrics, along with improved memory sharing and pooling capabilities. Combined, these three key features enable new use models and increased flexibility in data center architectures. This facilitates the move to distributed, composable architectures and higher performance levels for AI/ML and other compute-intensive or memory-intensive workloads.

For SoC designers, signal integrity and power integrity (SI/PI) issues compound as data rates rise. Designing for 64 GT/s operation can be exceedingly tricky. Rambus has over 30 years of renowned leadership in SI/PI and has helped chip makers successfully implement hundreds of PCIe and CXL designs. With today’s announcement of a PHY that supports both PCIe 6.0 and CXL 3.0, we offer an easy-to-integrate solution that will help you take your chip design to the next level of performance.

CXL™ 3.0 Turns Up Scalability to 11
https://www.rambus.com/blogs/cxl-3-0-turns-up-scalability-to-11/
Tue, 02 Aug 2022

The CXL™ Consortium (of which Rambus is a member) has now released the 3.0 specification of the Compute Express Link™ (CXL) standard. CXL 3.0 introduces compelling new features that promise to increase data center performance and scalability while improving TCO. CXL has evolved rapidly from its introduction in 2019. The 1.0/1.1 specification enabled prototyping of CXL solutions. With 2.0 and the introduction of memory pooling, CXL reached the deployment phase. Now with CXL 3.0, we have capabilities that will power the scaling phase.

So, what’s new in CXL 3.0? Well, first up there’s a step function increase in data rate. CXL 1.x and 2.0 use the PCI Express® (PCIe®) 5.0 electricals for their physical layer: NRZ signaling at 32 Gigatransfers per second (GT/s). CXL 3.0 keeps that same philosophy of building on broadly adopted PCIe technology and extends it to the latest 6.0 version of the PCIe standard released earlier this year. That boosts CXL 3.0 data rates to 64 GT/s using PAM4 signaling.
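The enabler for that step function is the signaling change. PAM4 carries two bits per symbol, so the data rate doubles while the symbol rate, and therefore the channel’s Nyquist frequency, stays where it was for 32 GT/s NRZ. A small sketch of that relationship:

  # Symbol rate and Nyquist frequency for NRZ vs. PAM4 signaling
  def channel_stats(data_rate_gt_s, bits_per_symbol):
      symbol_rate = data_rate_gt_s / bits_per_symbol   # GBaud
      return symbol_rate, symbol_rate / 2              # (GBaud, Nyquist GHz)

  print("CXL 2.0 / PCIe 5.0 (NRZ): ", channel_stats(32, 1))   # (32.0, 16.0)
  print("CXL 3.0 / PCIe 6.0 (PAM4):", channel_stats(64, 2))   # (32.0, 16.0)
  # Twice the data rate at the same 32 GBaud symbol rate / 16 GHz Nyquist point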

A second big addition with CXL 3.0 is multi-tiered switching which enables the implementation of switch fabrics. CXL 2.0 allowed for a single layer of switching. CXL 2.0 switches can connect to upstream hosts and downstream devices, but not other switches, and the scale is limited to the available ports on a switch. With CXL 3.0, switch fabrics are enabled, where switches can connect to other switches, vastly increasing the scaling possibilities.
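To give a rough sense of that scaling jump: a single CXL 2.0 switch tops out at however many downstream ports it physically provides, while CXL 3.0 fabrics are described in CXL Consortium materials as addressing up to 4,096 nodes via 12-bit port-based routing IDs. The per-switch port count in the sketch below is an assumption chosen only for illustration.

  # Illustrative scaling: one CXL 2.0 switch vs. a CXL 3.0 switch fabric
  ports_per_switch = 32            # assumed downstream ports on a single switch
  pbr_id_bits = 12                 # CXL 3.0 port-based routing ID width
  fabric_nodes = 2 ** pbr_id_bits
  print(f"Single CXL 2.0 switch : up to {ports_per_switch} attached devices")
  print(f"CXL 3.0 switch fabric : up to {fabric_nodes:,} addressable nodes")
  # 2^12 = 4,096 nodes -- the 'vastly increased scaling' noted above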

Among additional features, CXL 3.0 introduces peer-to-peer direct memory access and enhancements to memory pooling where multiple hosts can coherently share a memory space on a CXL 3.0 device. These features enable new use models and increased flexibility in data center architectures. Taken together with 64 GT/s signaling and fabric switching, CXL 3.0 puts us on the road for composable server systems which optimize performance and TCO.

CXL is a once-in-a-decade technological force that will transform data center architectures. Supported by a who’s who of industry players including hyperscalers, system OEMs, platform and module makers, chip makers and IP providers, its rapid development is a reflection of the tremendous value it can deliver. Rambus is proud to be a member of the CXL Consortium and to provide chip and IP solutions that will shape the data center of the future.

Rambus Design Summit Interview Series: Steven Woo
https://www.rambus.com/blogs/rambus-design-summit-interview-series-steven-woo/
Mon, 18 Jul 2022

Rambus Fellow, Steven Woo, returns to the Rambus Design Summit stage tomorrow, and we are so excited for his keynote: Advancing Computing in the Accelerator Age! In our last interview before the show, we met with Steven to chat about his background, CXL, and some of the biggest challenges for computing in the years ahead.

Read on for Steven’s full interview and don’t forget to register for Rambus Design Summit, happening tomorrow!

Register for Rambus Design Summit!

Question: Can you tell us a bit about your background?
Steven: My background is in computer architecture, and I’ve done research work in multiprocessor architectures, parallel programming, and neural networks. I’ve always been interested in improving the performance of computer systems, and memory systems are critical to faster computing. I’ve led and worked on several projects here at Rambus pushing DRAM and memory performance in PCs and servers, domain-specific architectures for applications like machine learning, and advanced architectures for near-data processing.

Question: What are you working on at Rambus these days?
Steven: I’m currently working in Rambus Labs, the research organization within Rambus, where I lead a team of senior architects chartered with developing innovations for future DRAMs and memory systems. We get to work on longer-term research projects as well as with our business units on nearer-term programs. There are a lot of interesting challenges for future memory systems, and we’re working on solutions that apply to data centers, mobile computing, and high-performance systems.

Question: CXL is such an exciting emerging technology – how do you see that impacting the future of data center architecture?
Steven: CXL is one of the most disruptive technologies to emerge over the last 20 years. It will support emerging data center usage models by providing a cache-coherent interconnect for processors and accelerators, as well as memory expansion for applications that process large amounts of data. CXL will ultimately enable higher performance and improved resource sharing, reducing overall cost of ownership.

Question: What do you think are the biggest challenges for computing in the years ahead?
Steven: As the world’s digital data continues to increase, new innovations are needed so that processing can keep up.  With performance increasingly limited by data movement, the industry must focus on faster and more power-efficient interconnects and memory systems. Applications and usage models are changing, so system architectures must continue to evolve as well. Accelerators offer new ways to process data more quickly, and resource disaggregation enables higher resource utilization and improved cost of ownership that will influence the direction of computing architectures in the coming years.

CXL™ Fabric Manager Advances Next-Gen Data Centers
https://www.rambus.com/blogs/cxl-fabric-manager-advances-next-gen-data-centers/
Sun, 20 Mar 2022

The Compute Express Link™ (CXL) Consortium has invited Vincent Haché, Director of Systems Architecture at Rambus, to present a webinar on CXL Fabric Management on March 22nd.

The CXL 2.0 specification introduces a standardized fabric manager for inventory and resource allocation to enable easier adoption and management of CXL-based switch and fabric solutions.

The Fabric Management webinar will deliver an architectural overview of the CXL 2.0 management framework and explore how it addresses the requirements of enterprise and data center deployments. It will then introduce the CXL Fabric Manager (FM), describing its function and responsibilities, and provide a detailed description of the Component Command Interface (CCI), transport protocols, background operations, and categories of commands, including Management Command Sets. The presentation will conclude with a walk-through of Multi-Logical Device (MLD) management.

The CXL Consortium is an open industry-standards group formed to develop technical specifications that facilitate breakthrough performance for emerging usage models while supporting an open ecosystem for data center accelerators and other high-speed enhancements.

This webinar is the latest in a series of presentations illustrating CXL Consortium member company Rambus’ ongoing commitment to the CXL Memory Interconnect Initiative; a research program focused on improving performance, efficiency, and cost for a new era of data center architecture.

Announced on June 16, 2021, this initiative is the newest chapter in Rambus’ 30+ year history of advancing the leading edge of computing performance. It leverages the company’s expertise in memory and SerDes subsystems, semiconductor and network security, high-volume memory interface chips, and compute system architectures to develop breakthrough interconnect solutions for the future data center.

Realization of the chip solutions needed for memory expansion and pooling use cases, as well as server disaggregation and composability, will require the synthesis of a number of critical technologies. Rambus has been researching memory solutions for disaggregated architectures for close to a decade and leverages a system-aware design approach to solve next-generation challenges.

Details of the webinar can be found below:
Title: Introduction to the Compute Express Link™ (CXL™) Fabric Manager
Presenter: Vincent Haché, Rambus
Date/Time: March 22 @ 10am PT
Registration Link

About Vincent Haché

Vincent Haché is the Director of the Systems Architecture team in Rambus’s Interconnect BU, overseeing the HW and FW architectures for all CXL datacenter products. An experienced systems architect, Haché is responsible for defining CXL device architecture and driving adoption of PCIe, NVMe, and CXL in the data center for storage, HPC, cloud gaming, IaaS, and AI / Machine Learning.
