CXL Archives - Rambus
At Rambus, we create cutting-edge semiconductor and IP products, providing industry-leading chips and silicon IP to make data faster and safer.

Rambus CXL IP: A Journey from Spec to Compliance
https://www.rambus.com/blogs/rambus-cxl-ip-a-journey-from-spec-to-compliance/
Mon, 07 Apr 2025

[Updated April 7, 2025] With the ongoing efforts of the Rambus engineering team, we have now achieved compliance with CXL 2.0 for our CXL Controller IP, and it has been added to the Integrators List.

Company Name: Rambus
Product Name: PCIe5/CXL2 Controller IP
Device ID: 1115
Device Type: Type 3
Feature Set: CXL Core 2.0
Spec Revision: CXL 2.0
PHY Speed: 16 GT/s
Max Lane: x8
Form Factor: CEM
Function: IP
Compliance Event (CTE) Approved: CTE 007

We’ll keep you posted on future progress as we demonstrate, through the compliance process, that Rambus products deliver the latest features and benefits of the CXL specification.

___________________________________________

Driven by our unwavering commitment to quality and performance, a Rambus team of engineers, validation experts, and architects has been taking part in CXL® Compliance Test Events to ensure the flawless performance and market readiness of our CXL Controller IP. We are pleased to report that our CXL 2.0 Controller IP has achieved CXL 1.1 compliance and has been added to the Integrators List.

CXL Compliance Program

The CXL Compliance Program provides member companies with opportunities to test the functionality and interoperability of end products as defined in the CXL specification.

Structured into distinct phases—Pre-FYI (For Your Information), FYI Phase, and General Testing—the CXL Compliance workshops provided us with a comprehensive framework for assessing and validating our CXL Controller IP.  We leverage our team’s experience to implement the CXL Controller IP in FPGAs as a means to enable interoperability and protocol compliance with other CXL hardware solutions in the ecosystem.

Status of CXL Spec Compliance Phases (as of May 2024)

Four Tests to Compliance

The workshops involved four types of tests required to claim compliance, ensuring our CXL IP met CXL standards for reliability and performance across various parameters, including interoperability, protocol adherence, and electrical compliance.

  1. Interoperability tests, which involve establishing connections with other equipment present at the event.
  2. CXL Validation Tests (CXL CV), which involve verifying the connection, booting via the BIOS, OS enumeration, and executing the CXL validation software application on a “golden” host provided by the CXL Consortium (a minimal OS-enumeration sketch follows this list).
  3. Protocol tests on an exerciser, which establish a CXL-specific test sequence to verify capabilities, registers, and device responses.
  4. Electrical tests, which validate CXL compliance at speeds of 8 GT/s, 16 GT/s, or 32 GT/s, as with PCIe®.
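
To make the OS-enumeration step in item 2 above a little more concrete, here is a minimal sketch of how enumerated CXL devices can be listed on a Linux host. It assumes a kernel with the upstream Linux CXL driver stack and simply reads the sysfs directory that driver exposes; it is an illustration only, not part of the CXL CV test suite.

```python
import os

# sysfs path exposed by the Linux CXL core driver; present only on kernels
# with CXL support and with CXL devices enumerated by the BIOS/OS.
CXL_SYSFS = "/sys/bus/cxl/devices"

def list_cxl_devices():
    """Return the CXL device names (e.g. mem0, root0) the OS has enumerated."""
    if not os.path.isdir(CXL_SYSFS):
        return []  # no CXL driver loaded or no CXL devices present
    return sorted(os.listdir(CXL_SYSFS))

if __name__ == "__main__":
    devices = list_cxl_devices()
    if devices:
        print("Enumerated CXL devices:", ", ".join(devices))
    else:
        print("No CXL devices enumerated on this host.")
```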

After completing these tests, the Rambus CXL IP obtained compliance at a speed of 16 GT/s.

Rambus CXL Controller IP on the Integrators List

Benefits of Participation in CXL Compliance Test Events

Participation in CXL Compliance Test Events yielded numerous benefits, including enhanced CXL product quality, performance, and compatibility. Insights gained from these workshops enabled us to improve interoperability results with other CXL devices and hosts in the CXL ecosystem.

Achieving compliance for our CXL Controller IP underscores several key advantages of our solution for customers:

  • Cross-compatibility: Customers implementing a CXL controller in their ASIC design can leverage our solution’s seamless transition from FPGA to ASIC. The identical codebase ensures consistency and facilitates testing and validation in an FPGA environment before ASIC implementation.
  • Accelerated Validation: By utilizing our FPGA-compatible IP for prototyping, ASIC clients can expedite validation and bring-up phases.
  • Comprehensive Support: We stand by our clients throughout the development journey, offering expertise and guidance from prototyping to final ASIC implementation.

At Rambus, our dedication extends beyond delivering cutting-edge IP; we prioritize empowering our clients with the tools and support needed to succeed in the rapidly evolving landscape of high-speed interconnects.

Stay tuned for future updates on our CXL compliance journey. Thanks to our FPGA implementation efforts, our CXL 2.0 Controller IP is fully compliant with CXL 1.1 and is awaiting the official start of the CXL 2.0 general testing phase.

For more information, visit the Rambus CXL Controller IP page or contact us here.

Rambus, VIAVI and Samtec Demonstrate CXL® over Optics PoC at Upcoming SC24
https://www.rambus.com/blogs/rambus-viavi-and-samtec-demonstrate-cxl-over-optics-poc-at-upcoming-sc24/
Thu, 14 Nov 2024

The disruption of GenAI over the last few years has forced system architects and hardware designers to rethink data center topologies. While AI model sizes and compute capability are growing exponentially, I/O throughput and memory access are growing linearly. These trends create an unsustainable gap that needs to be addressed across the stack, from the physical layer at the chip level all the way to the network layer.

New external cabling solutions enable ever-changing data center topologies. Rack scale connectivity, as an example, will define next-generation architectures over long reach cables.  Copper will work for a couple of meters, but optical solutions are needed for a rack-to-rack use case with a cable length of 7 meters and for cable lengths exceeding 10 meters for larger clustering use cases.

What is CXL?

CXL is a breakthrough high-speed CPU-to-Device and CPU-to-Memory interconnect designed to accelerate next-generation data center topologies.

CXL is an open industry standard offering high-bandwidth low-latency connectivity between the host processor and devices such as accelerators, memory controller/expander, and smart I/O devices for heterogeneous computing and disaggregation use cases.

The CXL® Consortium is an open industry standard group formed to develop technical specifications that facilitate breakthrough performance for emerging usage models while supporting an open ecosystem for data center accelerators and other high-speed enhancements. The CXL Consortium represents a wide range of industry expertise including leading cloud service providers, communications OEMs, IP/silicon/device providers and system OEMs.

Rambus CXL Controller IP

Rambus high-performance CXL controller IP is optimized for use in SoCs, ASICs and FPGAs. These industry-leading solutions for high-performance interfaces address AI/ML, data center and edge applications.

The Rambus CXL Controller IP leverages a silicon-proven PCIe controller architecture for the CXL.io path and adds CXL.cache and CXL.mem paths specific to the CXL standard. The controller IP exposes a native Tx/Rx user interface for CXL.io traffic as well as an Intel CXL-cache/mem Protocol Interface (CPI) for CXL.mem and CXL.cache traffic.

The provided Graphical User Interface (GUI) Wizard allows designers to tailor the IP to their exact requirements, by enabling, disabling, and adjusting a vast array of parameters, including CXL device type, PIPE interface configuration, buffer sizes and latency, low power support, SR-IOV parameters, etc. for optimal throughput, latency, size and power.
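
As a purely illustrative sketch of this kind of parameterization (the option names below are hypothetical and do not reproduce the actual wizard fields), a controller configuration might be captured as a structured set of choices like the following:

```python
# Hypothetical configuration sketch; the real Rambus GUI Wizard exposes its own
# parameter names and ranges, which this example does not attempt to reproduce.
cxl_controller_config = {
    "device_type": "Type 3",       # Type 1, 2, or 3 per the CXL specification
    "pipe_width_bits": 32,         # PIPE interface width (assumed option)
    "max_link_speed_gts": 32,      # negotiated ceiling: 8, 16, or 32 GT/s
    "lanes": 8,                    # link width, x1 through x16
    "rx_buffer_depth": 512,        # example buffer-sizing knob (assumed option)
    "low_power_support": True,     # enable low-power link states
    "sriov_virtual_functions": 4,  # SR-IOV parameterization (assumed option)
}

def summarize(cfg: dict) -> str:
    """One-line summary a designer might log when generating the IP."""
    return (f"CXL {cfg['device_type']} controller, x{cfg['lanes']} at "
            f"{cfg['max_link_speed_gts']} GT/s, SR-IOV VFs={cfg['sriov_virtual_functions']}")

print(summarize(cxl_controller_config))
```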

The controller IP can be delivered standalone or integrated with the customer’s choice of CXL/PCIe compliant SerDes. It can also be provided with example reference designs for integration with FPGA SerDes.

VIAVI CXL Products

VIAVI Xgig Analyzer solutions for PCIe 5.0/6.0 support PCIe/CXL.io and CXL.cache/memory transactions with advanced trigger and filter templates that enable faster debugging and root cause analysis. The Xgig captures valuable real-time metrics and performs detailed analytics across multiple protocols.

VIAVI Xgig Exerciser solutions also support CXL compliance and traffic generation.

VIAVI PCIe interposers, such as the Xgig PCIe 16-lane CEM Interposer, can be used to capture CXL traffic running on a PCIe physical layer. The interposer creates a bi-directional interface between the protocol analyzer and system under test.

Samtec CXL Over Optics Technology

Samtec’s FireFly™ Micro Flyover System™ embedded and rugged transceivers carry data over optical cable for greater distances, or over copper for cost optimization. FireFly is the first interconnect system that gives designers the flexibility to use optical and copper interconnects interchangeably with the same connector system.

Samtec’s PCUO series supports PCIe and CXL protocols via patented FireFly optical transceivers in x4, x8 and x16 configurations at PCIe 4.0/16 Gbps data rates. PCIe 5.0/32 Gbps and PCIe 6.0/64 Gbps PAM4 data rates are also under development. Additionally, Samtec offers a growing family of optically-enabled industry-standard PCB form factors (PCIe CEM AIC, OCP NIC 3.0, OCP OAI EXP, EDSFF E3.x 2T, etc.) for easy-to-use optical connectivity.

For More Information
For more information about VIAVI’s CXL product portfolio, please visit www.viavisolutions.com/cxl.
For more information about the Rambus CXL product portfolio, please visit www.rambus.com/cxl.
For more information about the Samtec CXL product portfolio, please visit www.samtec.com/cxl-interconnect.

Rambus CXL IP Advances Data Center Capabilities in CXL Over Optics Demo
https://www.rambus.com/blogs/rambus-demonstrates-advancing-data-center-capabilities-with-cxl-over-optics/
Mon, 05 Aug 2024

At Rambus, we are committed to pioneering advancements that meet the evolving demands of modern data centers. Today, we are showcasing advanced technology IP for high-speed data center interconnects: CXL 2.0 over optics.

What is CXL?

Compute Express Link (CXL) is an open interconnect standard that enhances communication between processors, memory expansion, and accelerators. Built on the robust PCI Express (PCIe) framework, CXL provides memory coherency between CPU memory and attached devices. This innovation enables efficient resource sharing, reduces software complexity, and lowers system costs—making it an essential component in the future of data center architecture.

Demonstrating the Future: CXL Over Optics

In this demonstration, Olivier Alexandre, Senior Manager of Validation Engineering at Rambus, shows Rambus CXL IP instantiated in an endpoint device connected to a Viavi Xgig 6P4 Exerciser using Samtec Firefly optic cable technology, effectively creating a remote “CXL Memory Expansion” block.

Here are more details on the demo:

Device Setup: Our Device Under Test (DUT), incorporating Rambus CXL 2.0 Controller IP, is configured in CXL 2.0 Type 2, operating at 16 GT/s on four lanes. The Viavi Xgig 6P4 emulates a Root Complex Device, and both devices are linked through a Samtec Firefly PCUO G4 cable, supporting speeds up to 16 GT/s.

Performance Insights: The DUT successfully maintains stability at Gen 4 speed on four lanes (x4). We also conducted tests at varying speeds, confirming expected performance limits at earlier generations.

Device Discovery: During device discovery, the Rambus CXL IP-enabled device was correctly identified, highlighting device capability and integration.

Compliance Success: Utilizing the Viavi exerciser, we conducted a CXL 2.0 compliance test over a 100-meter fiber optic connection. The test suite, taking approximately 20 minutes, confirmed that our DUT passed all compliance tests.

The Promise of CXL/PCIe Over Optics

This demonstration illustrates the potential of CXL/PCIe over optics as a key solution to meet the bandwidth demands of heterogeneous, distributed data center architectures. Optical interconnects offer significant advantages including extended reach, reduced latency, and efficient resource sharing across multiple servers.

Learn more about Rambus CXL IP solutions here.

About Samtec and Viavi

Samtec
Known for its high-performance interconnect solutions, Samtec provides leading-edge technology such as the Firefly optic cable, enabling high-speed data transmission with impressive range and low latency.

Viavi
A leader in network testing and measurement, Viavi Solutions offers products like the Xgig 6P4 Exerciser, which is crucial for ensuring compliance and performance in complex network environments.

New CXL 3.1 Controller IP for Next-Generation Data Centers
https://www.rambus.com/blogs/new-cxl-3-1-controller-ip-for-next-generation-data-centers/
Tue, 23 Jan 2024

The AI boom is giving rise to profound changes in the data center; compute-intensive workloads are driving an unprecedented demand for low latency, high-bandwidth connectivity between CPUs, accelerators and storage. The Compute Express Link® (CXL®) interconnect offers new ways for data centers to enhance performance and efficiency.

As data centers grapple with increasingly complex AI workloads, the need for efficient communication between various components becomes paramount. CXL addresses this need by providing low-latency, high bandwidth connections that can improve overall memory and system performance.

Three Big Data Center Memory Challenges

CXL 3.1 takes data rates up to 64 GT/s and offers multi-tiered (fabric-attached) switching to allow for highly scalable memory pooling and sharing. These features will be key in the next generation of data centers to mitigate high memory costs and stranded memory resources while delivering increased memory bandwidth and capacity when needed.

“The performance demands of Generative AI and other advanced workloads require new architectural solutions enabled by CXL,” said Neeraj Paliwal, general manager of Silicon IP at Rambus. “The Rambus CXL 3.1 digital controller IP extends our leadership in this key technology delivering the throughput, scalability and security of the latest evolution of the CXL standard for our customers’ cutting-edge chip designs.”

The Rambus CXL 3.1 Controller IP is a flexible design suitable for both ASIC and FPGA implementations. It uses the Rambus PCIe® 6.1 Controller architecture for the CXL.io protocol, and it adds the CXL.cache and CXL.mem protocols specific to CXL. The built-in, zero-latency integrity and data encryption (IDE) module delivers state-of-the-art security against physical attacks on the CXL and PCIe links. The controller can be delivered standalone or integrated with the customer’s choice of CXL 3.1/PCIe 6.1 PHY.

CXL 3.1 Controller Block Diagram

CXL is a key interconnect for data centers and addresses many of the challenges posed by data-intensive workloads. Join Lou Ternullo at our upcoming webinar “Unlocking the Potential of CXL 3.1 and PCIe 6.1 for Next-Generation Data Centers” to learn how CXL and PCIe interconnects can help designers optimize their data center memory infrastructure solutions.

Compute Express Link (CXL): All you need to know
https://www.rambus.com/blogs/compute-express-link/
Tue, 23 Jan 2024

[Last updated on: January 23, 2024] In this blog post, we take an in-depth look at Compute Express Link® (CXL®), an open standard cache-coherent interconnect between processors and accelerators, smart NICs, and memory devices.

  • We explore how CXL can help data centers more efficiently handle the tremendous memory performance demands of generative AI and other advanced workloads.
  • We discuss how CXL technology maintains memory coherency between the CPU memory space and memory on attached devices to enable resource sharing (or pooling).
  • We also detail how CXL builds upon the physical and electrical interfaces of PCI Express® (PCIe®) with protocols that establish coherency, simplify the software stack, and maintain compatibility with existing standards.
  • Lastly, we review Rambus CXL solutions, which include the Rambus CXL 3.1 Controller. This IP comes with integrated Integrity and Data Encryption (IDE) modules to monitor and protect against cyber and physical attacks on CXL and PCIe links.

Table of Contents

  1. Industry Landscape: Why is CXL needed?
  2. An Introduction to CXL: What is Compute Express Link?
  3. What Is the CXL Consortium?
  4. CXL Protocols & Standards
  5. Compute Express Link vs PCIe: How Are They Related?
  6. CXL Features and Benefits
  7. CXL 2.0 and 3.1 Features
  8. Rambus CXL Solutions

1. Industry Landscape: Why is CXL needed?

Data centers face three major memory challenges as roadblocks to greater performance and lower total cost of ownership (TCO). The first of these is the limitations of the current server memory hierarchy. A three-order-of-magnitude latency gap exists between direct-attached DRAM and Solid-State Drive (SSD) storage. When a processor runs out of capacity in direct-attached memory, it must go to SSD, which leaves the processor waiting. That waiting, or latency, has a dramatic negative impact on computing performance.

Secondly, core counts in multi-core processors are scaling far faster than main memory channels. This translates to processor cores beyond a certain number being starved for memory bandwidth, sub-optimizing the benefit of additional cores.

Finally, with the increasing move to accelerated computing, wherein accelerators have their own direct-attached memory, there is the growing problem of underutilized or stranded memory resources.

Keep on reading:
PCIe 6.1 – All you need to know
CXL Memory Initiative: Enabling a New Era of Data Center Architecture

The solution to these data center memory challenges is a complementary, pin-efficient memory technology that can provide more bandwidth and capacity to processors in a flexible manner. Compute Express Link (CXL) is the broadly supported industry standard solution that has been developed to provide low-latency, memory cache coherent links between processors, accelerators and memory devices.

2. An Introduction to CXL: What is Compute Express Link?

CXL is an open standard industry-supported cache-coherent interconnect for processors, memory expansion, and accelerators. Essentially, CXL technology maintains memory coherency between the CPU memory space and memory on attached devices. This enables resource sharing (or pooling) for higher performance, reduces software stack complexity, and lowers overall system cost. The CXL Consortium has identified three primary classes of devices that will employ the new interconnect:

      • Type 1 Devices: Accelerators such as smart NICs typically lack local memory. Via CXL, these devices can communicate with the host processor’s DDR memory.
      • Type 2 Devices: GPUs, ASICs, and FPGAs are all equipped with DDR or HBM memory and can use CXL to make the host processor’s memory locally available to the accelerator—and the accelerator’s memory locally available to the CPU. They are also co-located in the same cache coherent domain and help boost heterogeneous workloads.
      • Type 3 Devices: Memory devices can be attached via CXL to provide additional bandwidth and capacity to host processors. The type of memory is independent of the host’s main memory.

3. What Is the CXL Consortium?

The CXL Consortium is an open industry standard group formed to develop technical specifications that facilitate breakthrough performance for emerging usage models while supporting an open ecosystem for data center accelerators and other high-speed enhancements.​

4. CXL Protocols & Standards

The CXL standard supports a variety of use cases via three protocols: CXL.io, CXL.cache, and CXL.memory.

      • CXL.io: This protocol is functionally equivalent to the PCIe protocol—and utilizes the broad industry adoption and familiarity of PCIe. As the foundational communication protocol, CXL.io is versatile and addresses a wide range of use cases.
      • CXL.cache: This protocol, which is designed for more specific applications, enables accelerators to efficiently access and cache host memory for optimized performance.
      • CXL.memory: This protocol enables a host, such as a processor, to access device-attached memory using load/store commands.

Together, these three protocols facilitate the coherent sharing of memory resources between computing devices, e.g., a CPU host and an AI accelerator. Essentially, this simplifies programming by enabling communication through shared memory. The protocols used to interconnect devices and hosts are as follows:

  • Type 1 Devices: CXL.io + CXL.cache
  • Type 2 Devices: CXL.io + CXL.cache + CXL.memory
  • Type 3 Devices: CXL.io + CXL.memory
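
The device-type-to-protocol pairings above can be captured as a small lookup, which is handy when reasoning about what traffic a given device class generates. This is a minimal sketch that simply mirrors the list above; the type labels and protocol strings are not a formal API.

```python
from enum import Enum

class CxlProtocol(Enum):
    IO = "CXL.io"
    CACHE = "CXL.cache"
    MEM = "CXL.memory"

# Protocol combinations per CXL device type, mirroring the list above.
DEVICE_TYPE_PROTOCOLS = {
    "Type 1": {CxlProtocol.IO, CxlProtocol.CACHE},                   # e.g. smart NICs
    "Type 2": {CxlProtocol.IO, CxlProtocol.CACHE, CxlProtocol.MEM},  # e.g. GPUs, FPGAs with local memory
    "Type 3": {CxlProtocol.IO, CxlProtocol.MEM},                     # e.g. memory expanders
}

def uses_protocol(device_type: str, protocol: CxlProtocol) -> bool:
    """True if the given CXL device type carries the given protocol."""
    return protocol in DEVICE_TYPE_PROTOCOLS[device_type]

assert uses_protocol("Type 3", CxlProtocol.MEM)
assert not uses_protocol("Type 3", CxlProtocol.CACHE)
```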

5. Compute Express Link vs PCIe: How Are They Related?

CXL builds upon the physical and electrical interfaces of PCIe with protocols that establish coherency, simplify the software stack, and maintain compatibility with existing standards. Specifically, CXL leverages a PCIe 5 feature that allows alternate protocols to use the physical PCIe layer. When a CXL-enabled accelerator is plugged into a x16 slot, the device negotiates with the host processor’s port at default PCI Express 1.0 transfer rates of 2.5 gigatransfers per second (GT/s). CXL transaction protocols are activated only if both sides support CXL. Otherwise, they operate as PCIe devices.

CXL 1.1 and 2.0 use the PCIe 5.0 physical layer, allowing data transfers at 32 GT/s, or up to 64 gigabytes per second (GB/s) in each direction over a 16-lane link.

CXL 3.1 uses the PCIe 6.1 physical layer to scale data transfers to 64 GT/s supporting up to 128 GB/s bi-directional communication over a x16 link.
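
A back-of-the-envelope check of the figures above: per-lane data rate multiplied by lane count, divided by eight bits per byte, gives the approximate raw per-direction bandwidth. This sketch deliberately ignores encoding, FLIT and protocol overhead, so real usable throughput is somewhat lower.

```python
def raw_link_bandwidth_gbps(data_rate_gts: float, lanes: int) -> float:
    """Approximate raw per-direction bandwidth in GB/s of a CXL/PCIe link.

    Ignores encoding, FLIT and protocol overhead, so usable throughput is lower.
    """
    return data_rate_gts * lanes / 8.0

# CXL 1.1/2.0 on PCIe 5.0 electricals: 32 GT/s per lane over a x16 link
print(raw_link_bandwidth_gbps(32, 16))  # ~64 GB/s raw per direction

# CXL 3.1 on PCIe 6.1 electricals: 64 GT/s per lane over a x16 link
print(raw_link_bandwidth_gbps(64, 16))  # ~128 GB/s raw per direction
```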

6. CXL Features and Benefits

Streamlining and improving low-latency connectivity and memory coherency significantly bolsters computing performance and efficiency while lowering TCO. Moreover, CXL memory expansion capabilities enable additional capacity and bandwidth above and beyond the direct-attach DIMM slots in today’s servers. CXL makes it possible to add more memory to a CPU host processor through a CXL-attached device. When paired with persistent memory, the low-latency CXL link allows the CPU host to use this additional memory in conjunction with DRAM memory. The performance of capacity-hungry workloads such as AI depends on large memory capacities. Considering that these are the types of workloads most businesses and data-center operators are investing in, the advantages of CXL are clear.

7. CXL 2.0 and 3.1 Features

CXL Memory Pooling Through Direct Connect


Memory Pooling

CXL 2.0 supports switching to enable memory pooling. With a CXL 2.0 switch, a host can access one or more devices from the pool. Although the hosts must be CXL 2.0-enabled to leverage this capability, the memory devices can be a mix of CXL 1.0, 1.1, and 2.0-enabled hardware. At 1.0/1.1, a device is limited to behaving as a single logical device accessible by only one host at a time. However, a 2.0 level device can be partitioned as multiple logical devices, allowing up to 16 hosts to simultaneously access different portions of the memory.

As an example, a host 1 (H1) can use half the memory in device 1 (D1) and a quarter of the memory in device 2 (D2) to finely match the memory requirements of its workload to the available capacity in the memory pool. The remaining capacity in devices D1 and D2 can be used by one or more of the other hosts up to a maximum of 16. Devices D3 and D4, CXL 1.0 and 1.1-enabled respectively, can be used by only one host at a time.
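
The host/device example above can be sketched as a toy allocation model. This is illustrative only: it assumes a pool of CXL 2.0 expanders that can be partitioned as multiple logical devices and CXL 1.0/1.1 expanders that cannot, and it does not model any real fabric-management API.

```python
# Toy model of CXL 2.0 memory pooling: a CXL 2.0 device can be partitioned
# among up to 16 hosts, while a CXL 1.0/1.1 device is bound to a single host.
class PooledDevice:
    MAX_HOSTS_CXL2 = 16

    def __init__(self, name: str, capacity_gb: int, cxl2_capable: bool):
        self.name = name
        self.capacity_gb = capacity_gb
        self.cxl2_capable = cxl2_capable
        self.allocations = {}  # host name -> GB allocated

    def allocate(self, host: str, gb: int) -> None:
        limit = self.MAX_HOSTS_CXL2 if self.cxl2_capable else 1
        if host not in self.allocations and len(self.allocations) >= limit:
            raise ValueError(f"{self.name}: host limit ({limit}) reached")
        if sum(self.allocations.values()) + gb > self.capacity_gb:
            raise ValueError(f"{self.name}: not enough free capacity")
        self.allocations[host] = self.allocations.get(host, 0) + gb

# Host H1 takes half of D1 and a quarter of D2, mirroring the example above.
d1 = PooledDevice("D1", 512, cxl2_capable=True)
d2 = PooledDevice("D2", 512, cxl2_capable=True)
d3 = PooledDevice("D3", 256, cxl2_capable=False)  # CXL 1.x: single host only

d1.allocate("H1", 256)
d2.allocate("H1", 128)
d3.allocate("H2", 256)
# d3.allocate("H3", 64)  # would raise: a CXL 1.x device serves one host at a time
```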

CXL 3.1 introduces peer-to-peer direct memory access and enhancements to memory pooling where multiple hosts can coherently share a memory space on a CXL 3.1 device. These features enable new use models and increased flexibility in data center architectures.

Switching

By moving to a CXL 2.0 direct-connect architecture, data centers can achieve the performance benefits of main memory expansion—and the efficiency and total cost of ownership (TCO) benefits of pooled memory. Assuming all hosts and devices are CXL 2.0 (and above)-enabled, “switching” is incorporated into the memory devices via a crossbar in the CXL memory pooling chip. This keeps latency low but requires a more powerful chip since it is now responsible for the control plane functionality performed by the switch. With low-latency direct connections, attached memory devices can employ DDR DRAM to provide expansion of host main memory. This can be done on a very flexible basis, as a host is able to access all—or portions of—the capacity of as many devices as needed to tackle a specific workload.

CXL 3.1 introduces multi-tiered switching which enables the implementation of switch fabrics. CXL 2.0 enabled a single layer of switching. With CXL 3.1, switch fabrics are enabled, where switches can connect to other switches, vastly increasing the scaling possibilities.

The “As Needed” Memory Paradigm

Analogous to ridesharing, CXL 2.0 and 3.1 allocate memory to hosts on an “as needed” basis, thereby delivering greater utilization and efficiency of memory. With CXL 3.1, memory pooling can be reconfigured dynamically without the need for a server (host) reboot. This architecture provides the option to provision server main memory for nominal workloads (rather than worst case), with the ability to access the pool when needed for high-capacity workloads and offering further benefits for TCO. Ultimately, the CXL memory pooling models can support the fundamental shift to server disaggregation and composability. In this paradigm, discrete units of compute, memory and storage can be composed on-demand to efficiently meet the needs of any workload.

Integrity and Data Encryption (IDE)

Disaggregation—or separating the components of server architectures—increases the attack surface. This is precisely why CXL includes a secure by design approach. Specifically, all three CXL protocols are secured via Integrity and Data Encryption (IDE) which provides confidentiality, integrity, and replay protection. IDE is implemented in hardware-level secure protocol engines instantiated in the CXL host and device chips to meet the high-speed data rate requirements of CXL without introducing additional latency. It should be noted that CXL chips and systems themselves require safeguards against tampering and cyberattacks. A hardware root of trust implemented in the CXL chips can provide this basis for security and support requirements for secure boot and secure firmware download.

Scaling Signaling to 64 GT/s

CXL 3.1 brings a step function increase in data rate of the standard. As mentioned earlier, CXL 1.1 and 2.0 use the PCIe 5.0 electricals for their physical layer: NRZ signaling at 32 GT/s. CXL 3.1 keeps that same philosophy of building on broadly adopted PCIe technology and extends it to the latest 6.1 version of the PCIe standard released in early 2022. That boosts CXL 3.1 data rates to 64 GT/s using PAM4 signaling. We cover the details of PAM4 signaling in PCIe 6 – All you need to know.

8. Rambus CXL Solutions

Rambus CXL 3.1 Controller

The Rambus CXL 3.1 Controller leverages the Rambus PCIe 6.1 Controller architecture for the CXL.io protocol and adds the CXL.cache and CXL.mem protocols specific to CXL. The controller exposes a native Tx/Rx user interface for CXL.io traffic as well as an Intel CXL-cache/mem Protocol Interface (CPI) for CXL.mem and CXL.cache traffic. There is also a CXL 3.1 Controller with AXI version of the core that is compliant with the AMBA AXI Protocol Specification (AXI3, AXI4 and AXI4-Lite).

Read on:
Rambus CXL Memory Initiative
Rambus CXL & PCI Express Controllers

Zero-Latency IDE

The Rambus CXL 3.1 and PCIe 6.1 controllers are available with integrated Integrity and Data Encryption (IDE) modules. IDE monitors and protects against physical attacks on CXL and PCIe links. CXL requires extremely low latency to enable load-store memory architectures and cache-coherent links for its targeted use cases. This breakthrough controller with a zero-latency IDE delivers state-of-the-art security and performance at full 32 GT/s speed.

The built-in IDE modules employ a 256-bit AES-GCM (Advanced Encryption Standard, Galois/Counter Mode) symmetric-key cryptographic block cipher, helping chip designers and security architects to ensure confidentiality, integrity, and replay protection for traffic that travels over CXL and PCIe links. This secure functionality is especially imperative for data center computing applications including AI/ML and high-performance computing (HPC).
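
To illustrate the style of cryptography involved (not the actual IDE hardware pipeline or key-management flow, which the CXL and PCIe IDE specifications define), here is a minimal software sketch of 256-bit AES-GCM providing confidentiality, integrity, and tamper detection for a payload, using the widely available Python cryptography package.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# 256-bit key, as used by CXL/PCIe IDE; in real IDE the keys are negotiated
# and managed by the link partners, not generated ad hoc like this.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

nonce = os.urandom(12)                     # 96-bit nonce, unique per message
payload = b"example CXL.mem transaction payload"
header = b"example plaintext header"       # authenticated but not encrypted

ciphertext = aesgcm.encrypt(nonce, payload, header)    # includes the GCM tag
recovered = aesgcm.decrypt(nonce, ciphertext, header)  # raises if tampered
assert recovered == payload
```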

Key features include:

      • IDE security with zero latency for CXL.mem and CXL.cache
      • Robust protection from physical security attacks, minimizing the safety, financial, and brand reputation risks of a security breach
      • IDE modules pre-integrated in Rambus CXL 3.1 and PCIe 6.1 controllers reduce implementation risks and speed time-to-market

Final Thoughts

CXL is a once-in-a-decade technological force that will transform data center architectures. Supported by a who’s who of industry players including hyperscalers, system OEMs, platform and module makers, chip makers and IP providers, its rapid development is a reflection of the tremendous value it can deliver.

This is why Rambus launched the CXL Memory Initiative—to research and develop solutions that enable a new era of data center performance and efficiency. Current Rambus CXL solutions include the Rambus CXL 3.1 Controller with integrated IDE.

Exploring Exascale Computing: Insights from SC23
https://www.rambus.com/blogs/exploring-exascale-computing-insights-from-sc23/
Tue, 19 Dec 2023

Supercomputing 2023 brought together some of the brightest minds in the field of high-performance computing, showcasing the latest in exascale computing and the challenges faced in the pursuit of next-generation advances in computing. Talks by Scott Atchley from Oak Ridge National Laboratory and Stephen Pawlowski from Intel stood out for their valuable perspectives on the current state of supercomputing and future directions for the industry.

Frontier: Exploring Exascale

Scott Atchley, Distinguished R&D Staff Member and Chief Technology Officer, Oak Ridge National Laboratory’s National Center for Computational Science

Scott Atchley’s talk delved into the US supercomputer “Frontier” and its journey to meet the challenges of exascale computing. The goal was ambitious: achieving performance levels 1000 times higher than petascale systems deployed in 2008, all within a budget of 4x-6x compared to the previous generation.

Challenges identified by DARPA in 2008 when planning for Frontier included energy and power, memory and storage, concurrency and locality, and resiliency. Frontier successfully addressed these challenges, showcasing advancements in power efficiency, memory capacity and bandwidth, concurrency management, and resiliency. However, the need for a budget 4x-6x higher than the previous generation arose due to technology costs not declining by 1000x, which limited the growth of many resources compared to the previous generation of supercomputers. Components like storage and memory, particularly with the use of High Bandwidth Memory (HBM), proved more expensive.

The findings underscore the complexities of achieving exascale computing and the necessity of adapting to evolving technological landscapes, especially in the face of cost dynamics in storage and memory technologies.

A Perspective on 1000x Energy Efficiency

Stephen Pawlowski, Senior Fellow, Intel

In his keynote, Stephen Pawlowski discussed the challenges of achieving 1000x energy efficiency within the next two decades. With exascale supercomputers now a reality, early thoughts are being discussed about how to improve energy consumption and power efficiency, critical factors in achieving next-generation performance.

Pawlowski highlighted the significant energy and time consumed by data movement, especially between processors and memory. To address this, he proposed stacking high-performance memory on top of a System-on-Chip (SoC). This approach promises a 5-6x reduction in energy and a 10x boost in bandwidth. The potential benefits make it a compelling path forward for the industry.

However, challenges emerge, such as the need to standardize memory footprints, determine interconnect locations, manage thermals, and address issues like Error-Correcting Codes (ECC) and post-package repair.

SC23 Show Floor Highlights

The show floor featured many exciting developments, with the CXL consortium showcasing numerous demonstrations of CXL technology, including the Rambus CXL Platform Development Kit (PDK) announced at the show. The Rambus PDK marks an exciting step in the CXL journey that enables module and system makers to prototype and test CXL-based memory expansion and pooling solutions for AI.

Rambus CXL Platform Development Kit (PDK)

AI remained a focal point at SC23, with several demos featuring cutting-edge AI platforms at both the chip and system levels. As models become larger and more sophisticated, continued advances in architecture and memory systems will be needed to keep up with these growing demands.

With computing performance continuing its upward trajectory, and with power efficiency improvements becoming more difficult with each new generation, there continues to be a noticeable increase in discussions and demonstrations of liquid cooling, which is set to become pervasive in future data centers. There were also some intriguing immersion cooling demos, offering the promise of even greater cooling capabilities than traditional liquid cooling technology if needed by future systems.

Supercomputing 2023, through these talks and on the show floor, provided a glimpse into the relentless pursuit of higher performance, energy efficiency, and innovative solutions shaping the future of high-performance computing.

CXL 3.1: What’s Next for CXL-based Memory in the Data Center
https://www.rambus.com/blogs/cxl-3-1-whats-next-for-cxl-based-memory-in-the-data-center/
Tue, 14 Nov 2023

Today (Nov. 14th, 2023) the CXL™ Consortium announced the continued evolution of the Compute Express Link™ standard with the release of the 3.1 specification. CXL 3.1, backward compatible with all previous generations, improves fabric manageability, further optimizes resource utilization, enables trusted compute environments, extends memory sharing and pooling to avoid stranded memory, and facilitates memory sharing between accelerators. When deployed, these improvements will boost the performance of AI and other demanding compute workloads.

Supercomputing 2023 (SC23), going on this week in Denver, provided the perfect backdrop for announcing this latest advancement in the CXL standard. At SC23, the Consortium is hosting demos from 16 ecosystem partners at the CXL pavilion (Booth #1301) including Rambus. There, we’re demonstrating the newly-announced Rambus CXL Platform Development Kit (PDK) performing memory tiering operations.

Three Big Data Center Memory Challenges

CXL memory tiering tackles one of the biggest hurdles for data center computing: namely, the three-order-of-magnitude latency gap that exists between direct-attached DRAM and Solid-State Drive (SSD) storage. When a processor runs out of capacity in DRAM main memory, it must go to SSD, which leaves the processor waiting. That waiting, or latency, has a dramatic negative impact on computing performance.

Two other big problems arise from 1) the rapid scaling of core counts in multi-core processors and 2) the move to accelerated computing with purpose-built GPUs, DPUs, etc. Core counts are rising far faster than main memory channels, with the upshot being that cores past a certain number are starved for memory bandwidth, sub-optimizing the benefit of those additional cores. With accelerated computing, a growing number of processors and accelerators have direct-attached memory to provide higher performance, but more independent pockets of memory lead to a higher probability of memory resources being underutilized or stranded.

Bridging the Latency Gap with CXL
CXL Memory Tiering Can Bridge the Latency Gap

CXL promises to address all three of these challenges. CXL enables new memory tiers which bridge the latency gap between direct-attached memory and SSD, and provide greater memory bandwidth to unlock the power of multi-core processors. In addition, CXL allows memory resources to be shared among processors and accelerators addressing the stranded memory challenge.

Rambus CXL PDK Add-in Card

Delivering on the promise of CXL requires a great deal of co-design of solutions across the ecosystem spanning chips, IP, devices, systems and software. That’s where tools like the Rambus CXL PDK can make a major contribution. The PDK enables module and system makers to prototype and test CXL-based memory expansion and pooling solutions for AI infrastructure and other advanced systems. Interoperable with CXL 1.1 and CXL 2.0 capable processors, and memory from all the major memory suppliers, it leverages today’s available hardware to accelerate the development of the full stack of CXL-based solutions slated for deployment in a few years’ time.

For more information about the Rambus CXL PDK check out the Rambus CXL Memory Initiative here.

 

A Recap of MemCon 2023 with Mark Orthodoxou
https://www.rambus.com/blogs/a-recap-of-memcon-2023-with-mark-orthodoxou/
Mon, 10 Apr 2023

We’re just back from MemCon, the industry’s first conference entirely devoted to all things memory. Running over the course of two days, the conference brought together attendees from across the memory ecosystem. We caught up with Mark Orthodoxou, VP Strategic Marketing for CXL Processing Solutions at Rambus and MemCon keynote speaker.

Why is memory so important for the future of advanced computing?

There are many reasons why memory is so important. Fundamentally, compute needs data sets to work on. One example is the case of AI inference: the latency of storage is too high for compute operations to rely on if they are to run at speed. With workload demands increasing rapidly, the need for more memory bandwidth and capacity continues to rise.

Core Counts Increasing, AI Models Growing in Size

What are some of the biggest system-level memory challenges that you see today?

Processor core counts are growing much faster than the memory bandwidth available to service them on a per-CPU basis. So, cores are underserviced and underutilized when running AI-driven, intensive workloads. Similarly, the need for memory capacity is growing. We have seen AI training models grow to enormous sizes in recent years, passing the teraparameter mark. In addition, memory is as much as 50% of the cost of data center servers. So, it is a focal point not only for total cost of ownership reduction, but also for environmental and sustainability initiatives. The solution lies in cost-effectively unlocking both memory bandwidth and memory capacity, and in providing mechanisms for the reuse of memory from decommissioned servers.

Summary of Data Center Memory Challenges

Your presentation was called “CXL Technology: Revolutionizing the Data Center”. What is CXL and how will it revolutionize the data center?

CXL enables more memory and more memory bandwidth to be accessed by CPUs using industry-standard, ubiquitous physical interfaces, specifically PCIe (PCI Express), by overlaying a new coherent, low-latency, secure protocol. It will fundamentally change the architecture of servers, and even data centers, by moving the memory controller off the CPU and into the hands of the data center architects. With CXL technology, the industry is pursuing tiered-memory solutions that can break through the memory bottleneck while at the same time delivering greater efficiency and improved TCO. Ultimately, CXL technology can support composable architectures that match the amount of compute, memory and storage in an on-demand fashion to the needs of a wide range of advanced workloads.

The CXL Enabled Server

What did you enjoy most about MemCon?

Memory is something that I am very passionate about, and it’s always great to get together with industry peers who are equally passionate about all things memory. The MemCon event was a great opportunity to hear from all perspectives across the industry, whether that be those working on challenges at a silicon level, enabling software or deployment into massive hyperscale data centers. The event highlighted for me that collaboration across all layers of the memory ecosystem is of vital importance to ensure that we can enable new memory capabilities in the data center and deliver standardized memory solutions.

Even More CXL Webinar Q&A!
https://www.rambus.com/blogs/even-more-cxl-webinar-q-and-a/
Thu, 08 Dec 2022

We had such a great response to last week’s More CXL Webinar Q&A blog that we decided to reprise the questions answered live in our webinar How CXL Technology will Revolutionize the Data Center (available on-demand). Hope this provides more insights on the capabilities of Compute Express Link™ (CXL™) technology.

Click on a link below to jump to a specific question:

  1. There have been multiple interconnect standards proposed in the past: CXL, Gen-Z, OpenCAPI and CCIX. Is CXL going to be the one that finally gets adopted?
  2. What is memory coherence and why is it important?
  3. Will CXL be a positive or negative driver of overall memory capacity per server going forward given it provides a more efficient means of memory use?
  4. What are the long-term implications for the RDIMM market given CXL can now be used to attach DRAM to the CPU?
  5. How is compliance handled in CXL? How do you know if a device is “CXL-compliant”?
  6. For the first wave of CXL products that introduce memory expansion, what’s the biggest demand from customers…more bandwidth or more capacity?

1. There have been multiple interconnect standards proposed in the past: CXL, Gen-Z, OpenCAPI and CCIX. Is CXL going to be the one that finally gets adopted?

There is wide industry support for CXL, and the market leading platform makers Intel® and AMD®, and all the leading CSPs, are supporting it. CXL has the level of industry support you must achieve if a new technology with as broad a reach as CXL is to be successful. Gen-Z as well as OpenCAPI assets are now combined with CXL. This further strengthens CXL by allowing the best parts of all these standards to be combined over time under the CXL umbrella. Also, CXL adopts the PCI Express® (PCIe®) physical layer, so it benefits from the ubiquitous deployment of PCIe in the data center.

2. What is memory coherence and why is it important?

Coherence means that when two processors share a piece of data, they are guaranteed that if one processor updates the data, the other processor will have access to those updates. Hardware guarantees this, and this makes the task of programming much, much easier – helping programmers to develop more complex applications that require cores to share data more easily.

Now in the context of CXL – CXL provides for the same memory coherence mechanisms that are used today with native DRAM. Furthermore, there are some additional hardware coherency support capabilities in later versions of the specification.

3. Will CXL be a positive or negative driver of overall memory capacity per server going forward given it provides a more efficient means of memory use?

In fact, we think CXL will end up increasing memory demand. Fundamentally the reason for CXL to come into existence is the need for more memory. But as you point out it does also allow for more memory efficiency.

New technology to improve efficiency is a natural evolution in the data center. We can look back at a couple of similar situations over the past 20 years and see how they played out.

Back when multi-core CPUs were introduced, there was a concern that fewer CPUs would be sold. But in reality, multi-core CPUs opened up new capabilities that software developers took advantage of to build new applications. In the end, there was more demand for multi-core CPUs, and the market grew.

Virtualization of the server was thought likely to drive a reduction in capex on server equipment, because servers were becoming more efficient. Instead, the market grew as business models like Infrastructure as a Service and Platform as a Service thrived.

What we’ve seen over the years is that if you give software developers access to more tools, they create new workloads. Business leaders develop new product offerings. All of this drives a virtuous cycle of greater hardware market growth.

The industry as a whole is just scratching the surface for AI/ML use cases. CXL will enable new memory capacity, bandwidth, and tiering which will drive continued significant investment in memory infrastructure to enable these AI/ML workloads.

So back to efficiency – making memory more efficient is incredibly important because companies and cloud service providers are investing so much in memory to satisfy their insatiable need for more bandwidth and capacity. Capex dollars will continue to be spent on new memory, and CXL will help companies get much greater utility out of that spend.

4. What are the long-term implications for the RDIMM market given CXL can now be used to attach DRAM to the CPU?

Well, the RDIMM market is here to stay. Although CXL provides new options for memory tiering, data center operators will continue to want to have a significant amount of memory as close to the processing cores as possible with the lowest possible latency. RDIMMs address this need very well and will continue to address this tier of the memory hierarchy.

In addition, as we showed during the presentation, there are many use cases for CXL-attached memory modules which use RDIMMs. Cloud Service Providers have long relationships buying RDIMMs, they know how to use them, and they get an easily serviceable module in the data center. Therefore, we believe the market for RDIMMs will increase as a result of CXL.

5. How is compliance handled in CXL? How do you know if a device is “CXL-compliant”?

The CXL Consortium is developing a compliance program very similar to that of the PCI-SIG. We should expect the first CXL 1.1-compliant devices to appear on an integrators list next year (2023).

6. For the first wave of CXL products that introduce memory expansion, what’s the biggest demand from customers…more bandwidth or more capacity?

Great question and one that I get a lot. The simple answer is that it is really workload dependent. But the good news is that CXL can deliver both bandwidth and capacity.

Cores are Underserviced by Available Memory Bandwidth

If we go back to that graph (see above) I showed earlier in the webinar – where we hit a wall with the number of cores that we can serve with direct memory channels – that is, to first order, a bandwidth problem. But as soon as you unlock cores to do more work, generally speaking, those cores are going to require access to more memory capacity – whether it be more hot memory, warm memory, or cold memory. This new capacity and these new memory tiers are also enabled by the same CXL devices that can deliver the additional bandwidth.

There are workloads that are biased either to more bandwidth or to more capacity. For example, web search and content provider use cases would trade additional bandwidth for additional capacity. Meanwhile, in many AI/ML inference models or in-memory databases, capacity is extremely important. So, it really is a mix of needs dependent on the workload.

The beauty of CXL is that it provides for a wide variety of memory tiers that can be adopted in the right combination to best serve the needs of a given workload while delivering the lowest overall Total Cost of Ownership. When overlaid with software, like that which will come out of initiatives like the recently announced OCP Composable Memory System workgroup, the possibilities are very exciting.

If you have any questions regarding CXL technology or Rambus products, feel free to ask them here.

More CXL Webinar Q&A
https://www.rambus.com/blogs/more-cxl-webinar-q-and-a/
Thu, 01 Dec 2022

In our recent webinar How CXL Technology will Revolutionize the Data Center (available on-demand), we received far more questions than we had time to address during the scheduled Q&A. What follows are answers to many of those questions providing greater context on the capabilities of Compute Express Link™ (CXL™) technology.

Click on a link below to jump to a specific question:

  1. Is there a maximum number of compute nodes/memory nodes allowed in the case of CXL memory pooling and switching?
  2. Today’s dense servers (including OCP servers) often have all the front and back panel ports populated. What are your thoughts on how to make more ports available on today’s servers for CXL connections?
  3. What are the release dates for CXL 1.0, 2.0, and 3.0?
  4. Does CXL 2.0 support link encryption?
  5. Can CXL support chiplet implementations?
  6. How much end-to-end latency in nanoseconds (ns) does a CXL link add? Is there a breakdown for each component/layer?
  7. Are there benefits to using optical technologies, for example co-packaged optics, to implement CXL Fabrics?
  8. How does CXL-attached memory compare to HBM as far as bandwidth and capacity are concerned?

1. Is there a maximum number of compute nodes/memory nodes allowed in the case of CXL memory pooling and switching?

While CXL does put upper bounds on the number of Type 1 and Type 2 end points in a pooled or switched architecture, the number of Type 3 end points in practice is dictated more by latency requirements and efficiency goals.

What we have seen is that scaling a specific pooling element beyond a certain number of ports carries with it a latency penalty that end customers do not want to pay. However, too few ports do not deliver the required efficiency gains. Our observation is that the industry seems to be converging on pooled memory across anywhere from 4 to 16 compute nodes, at least in first-generation implementations.

2. Today’s dense servers (including OCP servers) often have all the front and back panel ports populated. What are your thoughts on how to make more ports available on today’s servers for CXL connections?

Drive slots in the front of a server and add-in card slots in the back of a server are indeed a precious resource in today’s data center. Ultimately, workload needs drive how the resources in a server are used. Compute servers (approx. 2/3 of all servers in data centers worldwide by our estimation) are more likely to take advantage of CXL-attached memory and sacrifice some PCI Express® (PCIe®) lanes and front or back of server slots for memory expansion or for connectivity to pooled memory. Storage servers less so.

The introduction of CXL may well change server architectures, everything from new memory module form factors to new backplanes to new rack mount server or appliance form factors, in order to allow for CXL-attached memory. In fact, we’ve already seen various standards bodies attempting to introduce such new form factors and architectures. Also, with every new CPU generation, the number of PCIe (and now PCIe/CXL) lanes is increasing, providing more opportunities for attachment in either the front or back of the server.

3. What are the release dates for CXL 1.0, 2.0, and 3.0?

CXL 1.1, the first release, came in March 2020. CXL 2.0 was released in November 2020, and CXL 3.0 in August 2022. The CXL Consortium allows ECNs to add optional capabilities between major releases.

4. Does CXL 2.0 support link encryption?

Yes, CXL 2.0 includes Link-level Integrity and Data Encryption (CXL IDE) as an optional capability. Leveraged from PCIe IDE, CXL IDE provides for a secure connection relying on AES-GCM cryptography.

5. Can CXL support chiplet implementations?

Yes, CXL can universally be used for chiplet-to-chiplet (3D stacked or otherwise), package-to-package, or even system-to-system communication. The only limitation is that of physical reach due to the signal integrity characteristics of a given SerDes and the channels being driven. Both are implementation dependent. Longer reaches can of course be enabled through the use of retimers or active optical cables.

It should also be noted that CXL is supported as a protocol layer for Universal Chiplet Interconnect Express™ (UCIe™). The UCIe specification, announced in March 2022, is a new open standard for chiplet interconnect introduced by Intel®, AMD®, Arm®, and all the leading-edge foundries.

6. How much end-to-end latency in nanoseconds (ns) does a CXL link add? Is there a breakdown for each component/layer?

Latency adders are very implementation dependent. A package-to-package link delay between, say, a CPU and a CXL memory expander is often modeled as 4ns. However, different trace lengths, connector options, etc. will impact that. The latency introduced by the CXL logic in the CPU and the CXL memory expander are implementation dependent. The generally accepted target for total round trip unloaded read latency, including media access, is equivalency to one NUMA hop in a multi-socket compute architecture today (<100ns).
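
A simple way to reason about that target is to add up the pieces of the round trip. The breakdown below is purely illustrative, with assumed values for the controller and media components; only the ~4 ns link-flight figure and the <100 ns target come from the answer above.

```python
# Illustrative latency budget for an unloaded CXL memory read. All component
# values are assumptions for the sketch, not measurements of any specific
# implementation; only the link-flight figure and <100 ns goal echo the text.
latency_budget_ns = {
    "cpu_cxl_logic": 25,         # host-side controller/arbitration (assumed)
    "link_flight_outbound": 4,   # package-to-package delay, as modeled above
    "expander_cxl_logic": 25,    # device-side controller (assumed)
    "dram_media_access": 40,     # media access on the expander (assumed)
    "link_flight_return": 4,
}

total = sum(latency_budget_ns.values())
print(f"Unloaded round-trip read latency: {total} ns")
assert total < 100, "target: comparable to one NUMA hop (<100 ns)"
```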

7. Are there benefits to using optical technologies, for example co-packaged optics, to implement CXL Fabrics?

Yes. Optical technology in general provides for much longer reach than copper. Co-packaged optics is an implementation option for future CXL-based architectures and has the additional benefit of allowing for lower-power SerDes, since the electrical connection can be shorter versus discrete optical modules. As co-packaged optics technology continues to evolve, this will prove an interesting deployment option for the industry where applicable – most likely in rack-level cabled solutions that involve CXL pooling or switching.

8. How does CXL-attached memory compare to HBM as far as bandwidth and capacity are concerned?

CXL memory can be any kind of memory, depending on what type of media controller the CXL memory expander supports. However, HBM memory is much higher bandwidth than any compliant CXL port. After all, HBM achieves its bandwidth by leveraging a 1024-pin-wide bus. The bi-directional bandwidth of CXL is bounded by the data rate of the lanes used to form the CXL port. For example, an 8-wide CXL 2.0 port running on PCIe Gen 5 electricals at 32GT/s delivers 32GB/s in each direction. In practice, due to FLIT-packing, the effective bi-directional bandwidth is less than that. For simplicity, you can consider an 8-lane CXL 2.0 port to deliver bandwidth that is equivalent to a DDR5-5600 RDIMM. So, considerably less bandwidth than HBM, but at much greater pin efficiency. At the end of the day, the goal of CXL is to deliver more memory capacity and bandwidth to CPUs in the most pin-efficient manner possible with the best latency profile possible, so it is solving a very different problem than HBM. If maximum bandwidth is desired, regardless of pin count, then HBM is still the most effective way to achieve that goal.
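
The arithmetic behind the comparison above can be sketched as follows. Both functions give peak, raw numbers; FLIT-packing on the CXL side and refresh/command overhead on the DRAM side reduce the practical figures, so the DDR5-5600 equivalence is approximate.

```python
def cxl_port_bandwidth_gbps(data_rate_gts: float, lanes: int) -> float:
    """Raw per-direction bandwidth of a CXL port, before FLIT-packing overhead."""
    return data_rate_gts * lanes / 8.0

def ddr5_channel_bandwidth_gbps(transfer_rate_mts: int, bus_width_bits: int = 64) -> float:
    """Peak bandwidth of a DDR5 RDIMM channel (64 data bits, ECC excluded)."""
    return transfer_rate_mts * bus_width_bits / 8 / 1000.0

# x8 CXL 2.0 port on PCIe Gen 5 electricals, as in the answer above
print(cxl_port_bandwidth_gbps(32, 8))      # 32.0 GB/s raw, per direction

# DDR5-5600 RDIMM, the comparison point used above
print(ddr5_channel_bandwidth_gbps(5600))   # ~44.8 GB/s peak
```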

If you have any questions regarding CXL technology or Rambus products, feel free to ask them here.
