Rambus https://www.rambus.com/ At Rambus, we create cutting-edge semiconductor and IP products, providing industry-leading chips and silicon IP to make data faster and safer.

John Allen https://www.rambus.com/leadership/john-allen/ Tue, 10 Feb 2026 21:22:03 +0000
John Allen

Vice President & Chief Accounting Officer

John Allen is vice president and chief accounting officer at Rambus. John joined the company in 2023 and leads the global accounting organization, bringing over 40 years of finance experience to the role.

Before joining Rambus, John served as senior vice president and corporate controller at Xperi (formerly Tessera), including a period as acting chief financial officer. Prior to that, he held senior finance leadership roles at Corsair as vice president, corporate controller, and at Leadis Technology, where he served as chief financial officer as well as vice president, corporate controller. John has extensive semiconductor experience, having held finance and accounting positions at leading technology companies including Advanced Micro Devices (AMD) and Xilinx.

John holds a Bachelor of Arts in business economics from the University of California, Santa Barbara.

Rambus Announces Departure of Chief Financial Officer https://www.rambus.com/rambus-announces-departure-of-chief-financial-officer-10-feb-26/ Tue, 10 Feb 2026 21:01:37 +0000 SAN JOSE, Calif. – February 10, 2026 – Rambus Inc. (NASDAQ: RMBS), a premier chip and silicon IP provider making data faster and safer, today announced that Desmond Lynch, senior vice president and chief financial officer (CFO), will resign from Rambus effective February 27, 2026, to pursue another opportunity. A formal search has commenced for a new CFO. John Allen, current vice president and chief accounting officer at Rambus, will serve as interim CFO and ensure a seamless transition until a permanent successor has been appointed.

“Des has been a valued partner in supporting the company’s continued momentum, and we thank him for his many contributions,” said Luc Seraphin, chief executive officer at Rambus. “With John serving as interim CFO, backed by our strong finance organization, we are confident in our ongoing ability to execute on our growth strategy and deliver long‑term value.”

“It has been a privilege to serve as CFO of Rambus and work alongside such a talented global team,” said Desmond Lynch. “I am proud of the financial and operational milestones Rambus achieved, and look forward to following the continued success of the company.”

Separately, Rambus is reaffirming its previously issued guidance for the first quarter of fiscal year 2026.

About Rambus Inc.
Rambus delivers industry-leading chips and silicon IP for the data center and AI infrastructure. With over three decades of advanced semiconductor experience, our products and technologies address the critical bottlenecks between memory and processing to accelerate data-intensive workloads. By enabling greater bandwidth, efficiency and security across next‑generation computing platforms, we make data faster and safer. For more information, visit rambus.com.

Forward-Looking Statements
This release contains forward-looking statements under the Private Securities Litigation Reform Act of 1995, including those relating to the Company’s outlook and financial guidance for the first quarter of 2026. Such forward-looking statements are based on current expectations, estimates and projections, management’s beliefs and certain assumptions made by the Company’s management. Actual results may differ materially. The Company’s business generally is subject to a number of risks which are described more fully in Rambus’ periodic reports filed with the Securities and Exchange Commission. The Company undertakes no obligation to update forward-looking statements to reflect events or circumstances after the date hereof.

Contact:

Nicole Noutsios
Rambus Investor Relations
(510) 315-1003
rambus@nmnadvisors.com

Simon Blake-Wilson https://www.rambus.com/leadership/simon-blake-wilson/ Tue, 10 Feb 2026 20:24:42 +0000
Dr. Simon Blake-Wilson

Senior Vice President & General Manager of Silicon IP

Dr. Simon Blake-Wilson joined Rambus in January 2026 and serves as Senior Vice President and General Manager of Silicon IP. He is responsible for the development and growth of the company’s silicon IP products, driving high-performance, secured memory and interconnect architectural innovation in data center and edge connectivity applications.

Throughout his career, Simon has focused on driving commercial adoption of advanced technologies. He began his career as Director of Research at Certicom, driving adoption of Elliptic Curve Cryptography (ECC). He authored prominent ECC standards in international organizations including ANSI, ICAO, IEEE, and IETF, and wrote the SEC ECC standards that provide the cryptographic foundation of Bitcoin. Over the last 20 years, ECC evolved into the dominant cryptographic technology on the Internet, and its replacement by next-generation post-quantum cryptography is now underway.

Subsequently Simon served as Vice President, Embedded Security Solutions at AuthenTec, driving the adoption of semiconductor-based fingerprint sensors. He was a member of the executive team that sold AuthenTec to Apple, where the technology is marketed as Touch ID® and has driven broad adoption of fingerprints and biometrics more generally in consumer applications.

Most recently, Simon spent five years as Vice President of Sales at FARO, a leading technology provider in the metrology space, delivering advanced measurement solutions to diverse markets including manufacturing, construction, and public safety. FARO was acquired by Ametek in July 2025.

Simon holds a Ph.D. in mathematics from Royal Holloway, University of London and was a Fulbright student at Auburn University. As evidence that “the apple doesn’t fall far from the tree,” Simon recently discovered that his grandfather worked at Bletchley Park during WWII with the teams led by Alan Turing and Tommy Flowers that developed the world’s first electronic computer and broke the German Enigma codes, events captured in the Hollywood movie “The Imitation Game.” Like many of his generation, Simon’s grandfather took secrecy seriously and died without ever mentioning his wartime work to friends or family.

DEEPX, Rambus, and Samsung Foundry Collaborate to Enable Efficient Edge Inferencing Applications https://www.rambus.com/blogs/deepx-rambus-and-samsung-foundry-collaborate-to-enable-efficient-edge-inferencing-applications/ Tue, 10 Feb 2026 18:00:05 +0000 As artificial intelligence (AI) continues to proliferate across industries – from smart cities and autonomous vehicles to industrial automation, robotics, edge servers, and consumer electronics – edge inferencing has become a cornerstone of next-generation computing. Delivering real-time, low-power AI processing at the edge requires close coordination across AI compute architectures, memory subsystems, and silicon platforms. To meet these demands, DEEPX is collaborating with Rambus and Samsung Foundry to deliver a highly optimized solution that combines efficient AI compute, high-bandwidth memory interfaces, and advanced logic process technology.

A Proven Foundation Scaling Forward

As the foundation of this collaboration, DEEPX worked with Rambus and Samsung Foundry on the DX-M1 AI processor, fabricated using Samsung Foundry’s 5nm technology and integrating silicon-proven LPDDR5 controller IP from Rambus. DX-M1 has been deployed across a range of edge applications, including robotics, edge servers, AI-enabled IT services, smart cameras, and factory automation. Looking to the next generation of edge AI, DEEPX is developing the DX-M2 processor for ultra-low-power generative AI inference on edge devices using Samsung Foundry’s 2nm process technology. Samsung Foundry’s GAA-based 2nm platform is designed to deliver further improvements in power efficiency and performance scaling as edge AI workloads grow in complexity.

Through the Samsung Advanced Foundry Ecosystem (SAFE™) IP Alliance, Rambus works closely with Samsung Foundry to optimize its memory controller IP for advanced Samsung process technologies, enabling DEEPX to integrate proven IP more efficiently, lower design risk, and accelerate time to production for next-generation designs.

A Unified Solution for Edge AI

The collaboration between DEEPX, Rambus, and Samsung Foundry brings together three core pillars of edge inferencing:

  • AI Inference Technology: DEEPX contributes its ultra-efficient AI inference processors, designed to deliver high performance with minimal power consumption, making them ideal for endpoint devices such as AI PCs, AI of Things devices, automotive systems, edge servers, robotics, and industrial sensors.
  • High Performance Memory: Rambus enhances memory performance with its LPDDR5/5X memory controller IP, which supports data rates up to 9.6 Gbps and features advanced bank management, command queuing, and look-ahead logic to maximize throughput and minimize latency.
  • Advanced Process Technology: Samsung Foundry provides the silicon platform and ecosystem enablement that support DEEPX’s edge AI development, helping reduce integration complexity and improve design predictability through advanced logic processes and the SAFE™ Alliance. Samsung Foundry’s 2nm GAA process technology represents a key next step for DEEPX’s DX-M2 processor, supporting further gains in power efficiency and performance scaling.

Together, these technologies empower edge devices to run complex AI workloads locally with low power and high performance efficiency, setting the stage for the next generation of edge inferencing.

Optimized Memory for AI Inference

The Rambus LPDDR5/5X memory controller IP is purpose-built for applications requiring high memory throughput at low power. It supports features such as:

  • Queue-based user interface with reordering scheduler
  • Look-ahead activate, precharge, and auto-precharge logic
  • Support for burst lengths BL16 and BL32
  • Parity protection and in-line ECC
  • Compatibility with LPDDR5T, LPDDR5, and LPDDR5X devices
  • Interoperability with Samsung LPDDR5/5X PHY

These capabilities are essential for AI inference, where memory bandwidth and latency directly impact model responsiveness and accuracy.

The Value of Samsung Foundry’s “One-Stop-Shop” Model

Samsung Foundry brings together advanced logic process technology and a tightly aligned SAFE™ IP ecosystem through a vertically integrated technology stack that simplifies complex programs. By coordinating cutting-edge logic processes, IP readiness, and manufacturing considerations earlier in the design cycle, Samsung Foundry helps reduce multi-vendor friction, improve integration efficiency, and accelerate time-to-market.

For edge AI applications such as DEEPX’s DX-M roadmap, Samsung Foundry’s scalable process portfolio – from FinFET to leading-edge 2nm GAA – supports aggressive power-performance targets while maintaining manufacturability. Through collaboration with the SAFE™ ecosystem, memory controller IP from partners like Rambus can be efficiently integrated, helping reduce risk and accelerate time to silicon.

This ecosystem-driven model allows customers to focus on AI architecture and application differentiation, while relying on a stable and scalable silicon platform to support current and future edge AI designs.

Empowering the AI Revolution at the Edge

This collaboration exemplifies the power of ecosystem synergy. By combining DEEPX’s AI compute innovation, Samsung Foundry’s manufacturing excellence and ecosystem enablement, and Rambus’ memory interface leadership, the trio is enabling a new generation of edge devices that are smarter, faster, and more secure.

Whether it’s enabling real-time object detection in smart cameras, predictive maintenance in industrial systems, or intelligent navigation in autonomous drones, the joint solution is poised to transform how AI is deployed at the edge.

Looking Ahead: Pushing the Boundaries with LPDDR6

Looking ahead, DEEPX and Rambus are extending their collaboration to the next frontier: LPDDR6 & LPDDR6-PIM (Processing In Memory). As AI models grow in complexity and demand even greater memory bandwidth, LPDDR6 is poised to deliver speeds exceeding 9.6 Gbps, while reducing operational power by up to 30% compared to LPDDR5X.

DEEPX, with its roadmap for next-generation AI chips like the DX-M2, is aligning its architecture to take full advantage of LPDDR6’s capabilities.

This forward-looking collaboration underscores the trio’s commitment to redefining what’s possible in edge AI—delivering smarter, faster, and more efficient solutions that scale with the future of computing.

SDRAM https://www.rambus.com/chip-interface-ip-glossary/sdram/ Fri, 06 Feb 2026 00:25:48 +0000

SDRAM (Synchronous Dynamic Random Access Memory)

What is SDRAM (Synchronous Dynamic Random Access Memory)?

SDRAM is a type of dynamic random access memory (DRAM) that synchronizes its operations with the system bus clock, allowing for predictable and high-speed data access. Unlike asynchronous DRAM, SDRAM uses a clock signal to coordinate memory access, enabling pipelined operations and improved throughput. It is widely used in computers, embedded systems, and consumer electronics.

How SDRAM works

SDRAM operates in sync with the CPU or memory controller clock. It divides memory into banks that can be accessed independently, allowing for interleaved access and burst transfers. Commands such as ACTIVATE, READ, WRITE, and PRECHARGE are issued in timed sequences, enabling efficient scheduling and pipelining. The synchronous nature of SDRAM allows it to queue multiple instructions and execute them in rapid succession, reducing latency and increasing bandwidth.
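The command flow described above can be illustrated with a toy model: a bank must open a row with ACTIVATE before a READ or WRITE, a row hit needs no new ACTIVATE, and switching rows requires a PRECHARGE first. This is a simplified sketch with made-up class names and no timing parameters, not any real controller's interface.

```python
# Toy model of SDRAM command sequencing. Each bank tracks its open row;
# the controller issues PRECHARGE/ACTIVATE only when the row changes.
class Bank:
    def __init__(self):
        self.open_row = None  # None means the bank is idle (no row open)

class SdramModel:
    def __init__(self, num_banks=4):
        self.banks = [Bank() for _ in range(num_banks)]
        self.log = []  # trace of issued commands

    def read(self, bank, row, col):
        b = self.banks[bank]
        if b.open_row != row:              # row miss: close old row, open new one
            if b.open_row is not None:
                self.log.append(("PRECHARGE", bank))
            self.log.append(("ACTIVATE", bank, row))
            b.open_row = row
        self.log.append(("READ", bank, col))

m = SdramModel()
m.read(0, row=5, col=0)   # opens row 5 in bank 0
m.read(1, row=9, col=0)   # bank 1 is accessed independently (interleaving)
m.read(0, row=5, col=8)   # row hit: READ only, no new ACTIVATE
m.read(0, row=6, col=0)   # row miss: PRECHARGE, then ACTIVATE row 6
print(m.log)
```

The independent per-bank state is what makes interleaved access possible: while one bank is busy opening a row, another can already service reads.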

What are the key features of SDRAM?

  • Clock-synchronized interface
  • Multiple internal banks for parallel access
  • Burst read/write capability
  • Support for auto-refresh and self-refresh modes
  • Operating voltages ranging from 3.3V in original SDRAM down to lower supplies in successive DDR generations
  • Available in DDR (Double Data Rate) variants: DDR, DDR2, DDR3, DDR4, DDR5
 

What are the benefits of SDRAM?

  • Predictable Timing: Synchronization with the system clock simplifies controller design and improves reliability.
  • High Throughput: Supports burst mode and pipelining for faster data access.
  • Scalability: Available in various densities and configurations for different applications.
  • Cost-Effective: Mature technology with widespread availability and low production cost.
 

Enabling Technologies

SDRAM is foundational in:

  • Desktop and laptop memory modules
  • Embedded systems and microcontrollers
  • Graphics cards and game consoles
  • Networking equipment and industrial controllers
  • Mobile devices using LPDDR variants

Modern SDRAM implementations are enhanced by:

  • Memory controllers with advanced scheduling and error correction
  • Interface standards like JEDEC DDR specifications
  • Power management features for mobile and low-power applications
 

Rambus Technologies

Rambus offers DDR Controller IP with support for SDRAM features including 3DS device configurations, Write CRC, data bus inversion (DBI), fine granularity refresh, additive latency, per-DRAM addressability, and temperature-controlled refresh. Learn more here.

RTL (Register Transfer Level) https://www.rambus.com/chip-interface-ip-glossary/rtl/ Fri, 06 Feb 2026 00:08:47 +0000

RTL (Register Transfer Level)

What is RTL (Register Transfer Level)?

Register Transfer Level (RTL) is a design abstraction used in digital circuit design that describes the flow of data between hardware registers and the logical operations performed on that data. RTL is a foundational concept in hardware description languages (HDLs) like Verilog and VHDL, and is used to model, simulate, and synthesize digital systems such as processors, memory controllers, and custom accelerators.

How RTL works

At the RTL level, designers specify how data moves between registers and how it is transformed by combinational logic in response to clock signals and control inputs. RTL code defines:

  • Registers: Storage elements that hold data.
  • Combinational Logic: Operations like addition, comparison, or multiplexing.
  • Control Logic: Determines when and how data is transferred.

RTL design is typically the first step in the hardware development lifecycle, followed by simulation, synthesis into gate-level netlists, and physical implementation.
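The register/combinational split above can be illustrated outside an HDL. The Python sketch below is only an analogy for what a clocked process expresses in Verilog or VHDL: combinational logic computes a next value from inputs and current state, and the register captures that value only on a clock edge, gated by control logic.

```python
# Register-transfer analogy in Python: one register, a combinational
# adder, and an explicit clock edge that commits the next value.
class Accumulator:
    def __init__(self):
        self.reg = 0  # the register: state held between clock edges

    def next_value(self, din, enable):
        # combinational logic: a pure function of inputs and current state
        return self.reg + din if enable else self.reg

    def clock_edge(self, din, enable):
        # on the rising edge, the register captures the combinational result
        self.reg = self.next_value(din, enable)

acc = Accumulator()
for _ in range(3):
    acc.clock_edge(din=2, enable=True)   # three enabled clocks: 0 -> 2 -> 4 -> 6
acc.clock_edge(din=2, enable=False)      # control logic gates the transfer
print(acc.reg)  # 6
```

Synthesis tools map exactly this structure to hardware: the `next_value` expression becomes gates, and `reg` becomes a bank of flip-flops.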

What are the key features of RTL?

  • Describes synchronous digital logic
  • Supports modular and hierarchical design
  • Enables simulation and testbench development
  • Compatible with synthesis tools for ASIC and FPGA targets
  • Facilitates formal verification and linting
 

What are the benefits of RTL?

  • Precise Control: Enables detailed specification of timing and data flow.
  • Simulation-Friendly: Supports functional verification before synthesis.
  • Reusable: RTL modules can be reused across multiple designs and projects.
  • Tool-Compatible: Works with EDA tools for synthesis, timing analysis, and formal verification.
 

Enabling Technologies

RTL design is supported by:

  • HDLs
  • EDA tools
  • Simulation environments
  • FPGA toolchains
  • Formal verification tools
Root Port https://www.rambus.com/chip-interface-ip-glossary/root-port/ Fri, 06 Feb 2026 00:00:24 +0000

Root Port

What is Root Port?

In PCI Express (PCIe) architecture, a Root Port is a type of port located in the Root Complex, which connects the CPU and memory subsystem to PCIe devices. It initiates PCIe transactions and manages communication between the host system and downstream components such as endpoints, switches, and bridges. Root Ports are essential for system initialization, configuration, and data transfer in PCIe-based platforms.

How Root Port works

The Root Port acts as the origin point for PCIe traffic. During system boot-up, it performs device enumeration, link training, and configuration space access for all connected PCIe devices. It supports transaction layer packets (TLPs) for reads, writes, and interrupts, and handles flow control, error reporting, and power management. In systems with multiple Root Ports, each port can independently manage its own hierarchy of devices, enabling parallelism and scalability.
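As a rough illustration of enumeration, the sketch below walks a hypothetical topology depth-first and assigns bus numbers the way a Root Port discovers its hierarchy at boot. Real enumeration probes PCIe configuration space and programs bridge secondary/subordinate bus registers, all of which is elided here; the device names are invented.

```python
# Depth-first bus numbering over a hypothetical PCIe topology.
# Each node discovered below the Root Port receives the next bus number.
def enumerate_bus(node, bus, assignments):
    assignments[node["name"]] = bus
    next_bus = bus
    for child in node.get("children", []):
        next_bus += 1                      # descend: allocate a new bus number
        next_bus = enumerate_bus(child, next_bus, assignments)
    return next_bus

topology = {
    "name": "root_port", "children": [
        {"name": "switch", "children": [
            {"name": "nvme_endpoint"},
            {"name": "nic_endpoint"},
        ]},
    ],
}
buses = {}
enumerate_bus(topology, 0, buses)
print(buses)  # {'root_port': 0, 'switch': 1, 'nvme_endpoint': 2, 'nic_endpoint': 3}
```

With multiple Root Ports, each one runs this walk over its own subtree, which is what allows independent hierarchies to be managed in parallel.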

What are the key features of Root Port?

  • Initiates PCIe transactions and manages link states
  • Supports Gen 1 to Gen 6 PCIe speeds
  • Handles interrupt signaling (e.g., MSI, MSI-X)
  • Implements error detection and reporting (e.g., ECRC, AER)
  • Integrates power management (L0s, L1, L2 states)
  • Compatible with virtualization technologies (SR-IOV, IOMMU)
 

What are the benefits of Root Port?

  • Centralized Control: Manages PCIe topology and device configuration.
  • Scalability: Supports multiple downstream devices and hierarchies.
  • Performance Optimization: Enables high-bandwidth, low-latency communication.
  • Protocol Compliance: Ensures compatibility with PCIe specifications and features like MSI/MSI-X, AER, and hot-plug.
 

Enabling Technologies

Root Ports are implemented in:

  • Server and desktop CPUs with integrated PCIe controllers
  • SoCs and FPGAs for embedded and custom platforms
  • CXL Root Complexes for memory and accelerator sharing
  • Operating systems that manage PCIe enumeration and resource allocation
  • Virtualization platforms for device passthrough and isolation
 

Rambus Technologies

Rambus offers PCI Express Controller IP that can be configured to support Root Port functionality.

Reorder Functionality https://www.rambus.com/chip-interface-ip-glossary/reorder-functionality/ Thu, 05 Feb 2026 23:21:04 +0000

Reorder Functionality

What is Reorder Functionality?

Reorder Functionality refers to the capability within high-speed data transmission systems, such as memory controllers, interconnect protocols (e.g., PCIe, CXL), and network-on-chip (NoC) architectures, to restore the correct sequence of data packets or memory transactions that arrive out of order. This is essential in systems that support parallelism, multi-threading, or multi-path routing, where performance optimization may lead to out-of-order delivery.

How Reorder Functionality works

In modern computing systems, data is often transmitted across multiple lanes or paths to maximize throughput. These paths may introduce latency variations, causing packets or memory operations to arrive at their destination in a different order than they were issued. Reorder functionality uses sequence identifiers, tags, or transaction IDs to track and buffer incoming data. Once all required elements are received, the system reorders them to match the original sequence before forwarding them to the next processing stage.
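A minimal sketch of the tag-based buffering just described: out-of-order arrivals are held until the next expected tag is present, then every contiguous element is released in issue order. The interface is hypothetical, not any specific protocol's reorder logic.

```python
# Tag-based reorder buffer: buffer out-of-order arrivals, release in order.
class ReorderBuffer:
    def __init__(self):
        self.pending = {}   # tag -> payload, held until deliverable
        self.next_tag = 0   # next tag the consumer expects

    def receive(self, tag, payload):
        self.pending[tag] = payload
        released = []
        # drain every contiguous tag starting at next_tag
        while self.next_tag in self.pending:
            released.append(self.pending.pop(self.next_tag))
            self.next_tag += 1
        return released

rob = ReorderBuffer()
print(rob.receive(2, "C"))  # [] - tags 0 and 1 are still outstanding
print(rob.receive(0, "A"))  # ['A'] - tag 1 still blocks 'C'
print(rob.receive(1, "B"))  # ['B', 'C'] - contiguous run released in order
```

The key property is that the sender may transmit in any order for throughput, while the consumer only ever observes the original sequence.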

This logic is typically embedded in:

  • Memory controllers to maintain consistency in read/write operations.
  • PCIe/CXL transaction layers to ensure protocol compliance.
  • Network-on-Chip routers to support deterministic behavior in multicore SoCs.
 

What are the key features of Reorder Functionality?

  • Tag-based tracking of transactions
  • Buffering and sequencing logic
  • Integration with ECC and error detection mechanisms
  • Support for multi-lane and multi-path routing
  • Transparent operation to software layers
 

What are the benefits of Reorder Functionality?

  • Data Integrity: Ensures correct execution order for memory and I/O operations.
  • Protocol Compliance: Maintains consistency with standards like PCIe and CXL.
  • Performance Optimization: Allows out-of-order transmission for higher throughput while preserving logical order.
  • System Reliability: Prevents race conditions and data corruption in concurrent environments.
 

Enabling Technologies

Reorder functionality is critical in:

  • PCIe 5.0/6.0 and CXL 2.0/3.0 interconnects
  • DDR5/LPDDR5 memory controllers
  • AI/ML accelerators and HPC systems
  • SoCs and FPGAs with parallel data paths
  • Cache-coherent fabrics and transaction-level protocols
 

Rambus Technologies

Rambus provides HBM technology that offers Integrated Reorder Functionality as an add-on core. Learn more here.

Reed-Solomon (RS) Code https://www.rambus.com/chip-interface-ip-glossary/reed-solomon-code/ Thu, 05 Feb 2026 22:54:07 +0000

Reed-Solomon (RS) Code

What is Reed-Solomon (RS) Code?

Reed-Solomon (RS) is a powerful error correction code (ECC) used to detect and correct multiple symbol errors in digital data transmissions and storage. Developed by Irving S. Reed and Gustave Solomon in 1960, RS codes are widely used in applications where data integrity is critical, such as optical discs (CDs, DVDs), QR codes, satellite communications, wireless systems, and solid-state drives (SSDs).

How RS Code works

Reed-Solomon codes treat data as a set of symbols (typically bytes) and add redundant parity symbols based on polynomial mathematics over Galois Fields (GF). For a code defined as RS(n, k), where:

  • n = total number of symbols (data + parity)
  • k = number of data symbols
  • n – k = number of parity symbols

The code can correct up to (n – k) / 2 symbol errors in a block. For example, RS(255, 223) can correct up to 16 symbol errors in a 255-symbol block.

The encoding process generates parity symbols using polynomial division, and the decoding process uses algorithms such as Berlekamp-Massey or the extended Euclidean algorithm to detect and correct errors.
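Two of the building blocks above can be shown concretely: the correction capacity t = (n - k) / 2, and multiplication in GF(2^8), the symbol arithmetic underlying RS encoding and decoding. The primitive polynomial 0x11D (x^8 + x^4 + x^3 + x^2 + 1) is a common choice for byte-oriented RS codes; this is an illustrative sketch, not production ECC code.

```python
# Correction capacity of an RS(n, k) code: t symbol errors per block.
def correctable_symbols(n, k):
    return (n - k) // 2

# Carry-less shift-and-add multiply in GF(2^8), reducing modulo the
# primitive polynomial whenever the degree reaches 8.
def gf256_mul(a, b, poly=0x11D):
    result = 0
    while b:
        if b & 1:
            result ^= a          # addition in GF(2^m) is XOR
        a <<= 1
        if a & 0x100:            # degree-8 overflow: reduce by the polynomial
            a ^= poly
        b >>= 1
    return result

print(correctable_symbols(255, 223))  # 16 symbol errors per 255-symbol block
print(gf256_mul(2, 128))              # 29: 0x100 reduced by 0x11D
```

Because each symbol is a whole byte, a burst of up to eight corrupted bits can land in a single symbol and still count as only one of the t correctable errors, which is why RS codes excel at burst error correction.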

What are the key features of RS Code?

  • Operates on symbols (not individual bits), making it ideal for burst error correction
  • Configurable code parameters (n, k) for different protection levels
  • Works over Galois Fields (GF(2^m)), typically GF(256) for byte-level operations
  • Supports both systematic (original data included) and non-systematic formats
  • Compatible with hardware and software implementations
 

What are the benefits of RS Code?

  • Robust Error Correction: Can correct burst errors and multiple symbol errors.
  • Data Integrity: Ensures reliable data recovery in noisy or lossy environments.
  • Versatility: Applicable to both storage and communication systems.
  • Efficiency: Adds minimal overhead relative to the level of protection provided.
 

Enabling Technologies

Reed-Solomon codes are integral to:

  • Digital storage media (CD, DVD, Blu-ray)
  • Wireless and satellite communications (e.g., DVB, 5G NR)
  • Data transmission protocols (e.g., DSL, WiMAX)
  • Barcodes and QR codes
  • RAID systems and flash memory controllers
Read-Modify-Write (RMW) https://www.rambus.com/chip-interface-ip-glossary/rmw/ Thu, 05 Feb 2026 22:31:28 +0000

Read-Modify-Write (RMW)

What is Read-Modify-Write (RMW)?

Read-Modify-Write (RMW) is a memory operation commonly used in computing systems where a processor or controller reads a data value from memory, modifies it, and writes the updated value back—all as a single atomic transaction. This technique is essential in multi-threaded and multi-core environments to ensure data consistency and prevent race conditions during concurrent access to shared memory.

How RMW works

The RMW cycle begins with a read of the target memory location. The processor then performs a modification—such as incrementing a counter, toggling a bit, or applying a mask—and finally writes the result back to the same location. Crucially, the entire sequence is treated as atomic, meaning no other thread or device can access or alter the data in between the read and write steps. This is typically enforced using hardware-level locking or cache coherency protocols.

In memory controllers, RMW is often used when updating partial data in a memory word, especially in systems with ECC (Error Correction Code). For example, modifying a single byte in a 64-bit word requires reading the full word, updating the byte, recalculating ECC, and writing the entire word back.
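The partial-write example above can be sketched as follows. Simple parity stands in for a real ECC code, and the memory layout and function names are hypothetical; the point is the three-step read, modify, write-back shape of the operation.

```python
# ECC-style read-modify-write: changing one byte of a 64-bit word means
# reading the whole word, patching the byte, recomputing the check value,
# and writing everything back as one operation.
def parity64(word):
    # single parity bit as a stand-in for a real ECC syndrome
    return bin(word).count("1") & 1

def rmw_byte(memory, addr, byte_index, new_byte):
    word, _old_check = memory[addr]       # 1) read full word + check bits
    shift = byte_index * 8
    word = (word & ~(0xFF << shift)) | (new_byte << shift)  # 2) modify one byte
    memory[addr] = (word, parity64(word))  # 3) write word + fresh check bits

memory = {0x1000: (0x1122334455667788, parity64(0x1122334455667788))}
rmw_byte(memory, 0x1000, byte_index=0, new_byte=0xFF)
print(hex(memory[0x1000][0]))  # 0x11223344556677ff
```

Writing only the changed byte without this cycle would leave the stored check bits describing the old word, so every subsequent read would flag a spurious error.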

What are the key features of RMW?

  • Supports partial updates to memory blocks
  • Integrated with ECC and parity logic
  • Common in cache controllers and memory subsystems
  • Enables synchronization primitives (e.g., compare-and-swap, fetch-and-add)
  • Critical for transactional memory and lock-free data structures
 

What are the benefits of RMW?

  • Atomicity: Prevents data corruption in concurrent environments.
  • Data Integrity: Ensures ECC and parity bits are correctly updated.
  • Efficiency: Reduces the need for multiple memory transactions.
  • Consistency: Maintains coherent memory views across processors and devices.

 

Enabling Technologies

RMW operations are foundational in:

  • Multi-core processors with shared memory
  • Memory controllers for DDR4, DDR5, and LPDDR5
  • Cache-coherent interconnects like CXL and AXI
  • Operating systems and hypervisors managing concurrent threads
  • Embedded systems requiring deterministic behavior
 

Rambus Technologies

Rambus offers interface IP that supports Read-Modify-Write add-on cores for our GDDR and LPDDR solutions. To learn more about the add-on core, click here.
