memory solutions Archives - Rambus

The importance of understanding bandwidth
https://www.rambus.com/blogs/mid-the-importance-of-understanding-bandwidth/
Mon, 21 Sep 2015

Did you know that the terms “latency” and “bandwidth” are frequently misused?

According to Loren Shalinsky, a Strategic Development Director at Rambus, latency refers to how long the CPU needs to wait before the first data is available. Meanwhile, bandwidth describes how fast additional data can be “streamed” after the first data point has arrived.

“Bandwidth becomes a bigger factor in performance when data is stored in ‘chunks’ rather than being randomly distributed,” Shalinsky wrote in a recently published Semiconductor Engineering article. “As an example, programming code tends to be random, as the code needs to respond to the specific input conditions. Large files, where perhaps megabytes or more of sequential data needs to be stored, would represent the other end of the spectrum.”


As Shalinsky points out, modern computer systems adhere to a 4K sector size, with large files broken up into easier-to-manage chunks of 4096 bytes. Interestingly, the concept of a sector size is actually a holdover from the original hard disk drives (HDDs). Indeed, even solid-state drives (SSDs) adhere to this traditional paradigm, thereby maintaining compatibility with computer file systems.
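The sector arithmetic is straightforward. As a minimal sketch (the 10 MB file size is just an illustrative figure), a file's byte count is rounded up to a whole number of 4096-byte sectors:

```python
import math

SECTOR_BYTES = 4096  # the 4K sector size described above

def sectors_needed(file_bytes: int) -> int:
    """Number of 4096-byte sectors a file of the given size occupies."""
    return math.ceil(file_bytes / SECTOR_BYTES)

# Illustrative example: a 10 MB file spans 2560 sectors.
print(sectors_needed(10 * 2**20))  # → 2560
```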

To further illustrate the differences between bandwidth and latency, Shalinsky created a detailed chart (see below) that compares expected bandwidth with the bandwidth specified by manufacturers for common and up-and-coming memory solutions.

“For each of these examples, I assume the first access is to a random storage location and, therefore, the latency must be accounted for,” he explained. “Note that when accounting for latency, the calculated bandwidth often pales in comparison to the bandwidth specified in a product brief.”
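The effect Shalinsky describes can be sketched with a simple model: the time to fetch one record is the access latency plus the transfer time at the specified maximum bandwidth, and the effective bandwidth is the record size divided by that total. The latency and bandwidth figures below are hypothetical, not taken from his chart:

```python
def effective_bandwidth(record_bytes: float, latency_s: float,
                        max_bw_bytes_per_s: float) -> float:
    """Bandwidth actually achieved for one random access of record_bytes,
    once the initial access latency is accounted for."""
    transfer_time = latency_s + record_bytes / max_bw_bytes_per_s
    return record_bytes / transfer_time

# Hypothetical device: ~100 us random-access latency, 500 MB/s specified
# max bandwidth, reading a single 4 KB sector.
bw = effective_bandwidth(4096, 100e-6, 500e6)
print(f"{bw / 1e6:.1f} MB/s")  # ~37.9 MB/s, far below the 500 MB/s spec
```

Even with these rough numbers, the calculated figure lands at under a tenth of the specified maximum, which is exactly the gap Shalinsky highlights.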

Understanding application use cases, says Shalinsky, is critical to determining the most appropriate type of memory. For example, let’s imagine a server running a database application with small 1-Kbyte records that are rarely accessed sequentially. Essentially, this means latency dominates performance.

“[Yes], SSDs [do] provide a significant improvement over hard drives,” he continued. “However, their performance is still three orders of magnitude [lower] than that of any DRAM-based memory system. [Nevertheless], SSDs have continued to move closer to the CPU, reducing their latency along the way.”

However, while SSDs adhering to NVMe aim to lower latencies, this does little to affect the NAND devices inside the SSDs themselves, which have an inherent latency of tens to hundreds of microseconds. In fact, even the greater-than-50% latency reduction touted for NVMe doesn’t mean the memory gap can be closed.

“For a database where the record size gets larger, say 8 Kbytes in size, the calculated bandwidth does improve markedly – as the system can now take better advantage of the max bandwidth and spread the ‘cost’ of the latency over more bytes,” Shalinsky confirmed. “By being very strategic in the placement of the data (e.g. for record sizes that are in the megabyte range), all of these systems have the capability of continuously streaming the data, and then bandwidths begin to approach the specified max bandwidth.”
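The record-size effect in this quote can be illustrated by sweeping the same latency-plus-transfer-time model over 1 KB, 8 KB, and 1 MB records. The latency and maximum-bandwidth values here are hypothetical stand-ins, chosen only to show the shape of the curve:

```python
LATENCY_S = 100e-6  # assumed random-access latency (hypothetical)
MAX_BW = 500e6      # assumed specified max bandwidth, bytes/s (hypothetical)

# Spreading the fixed latency "cost" over more bytes pushes the
# effective bandwidth toward the specified maximum.
for record_bytes in (1024, 8 * 1024, 1024 * 1024):
    total_time = LATENCY_S + record_bytes / MAX_BW
    eff = record_bytes / total_time
    print(f"{record_bytes:>8} B -> {eff / 1e6:6.1f} MB/s "
          f"({100 * eff / MAX_BW:.0f}% of spec)")
```

With these assumed figures, a 1 KB record achieves only a few percent of the specified bandwidth, an 8 KB record improves markedly, and a 1 MB record approaches the specified maximum, mirroring the progression Shalinsky describes.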

As we noted above, understanding application use cases is critical to determining the most appropriate type of memory. For example, DRAM-based memory systems are a good fit when it comes to maximizing performance for random operations.

“If you need memory for large records, consider what your budget allows and how much memory capacity and bandwidth you really need. Then you can make an informed decision,” Shalinsky concluded.

Rambus to Present Innovative Mobile Memory Technology at Linley Tech Mobile Conference 2014
https://www.rambus.com/rambus-to-present-innovative-mobile-memory-technology-at-linley-tech-mobile-conference-2014/
Wed, 30 Apr 2014

Session will focus on advanced memory solutions in fast-growth mobile device markets

Santa Clara, Calif. – April 30, 2014

Who: Rambus Inc. (NASDAQ: RMBS)
Where: Linley Tech Mobile Conference 2014
Santa Clara, Calif.
When: April 30 - May 1, 2014

At the Linley Tech Mobile Conference 2014, Rambus Director Ajay Jain will participate in a session demonstrating recent innovative mobile memory solutions for smartphones and tablets, targeting a mid-range market segment that continues to be among the fastest-growing in mobile computing. Jain will highlight a memory solution that uses enhanced signaling techniques to reduce memory subsystem power by up to 25% versus standard LPDDR3 while retaining backward compatibility.

Rambus will also be exhibiting at the Conference, which brings together industry expertise to address system design issues for mobile devices.

Click to Tweet: .@rambusinc will be presenting and sponsoring at the @LinleyGroup Mobile Tech Conference 2014 on 4/30-5/1 #Innovation

Presentation:

Session Title: Mobile Memory Solutions for Smartphones and Tablets
12:30 p.m. – 12:30 p.m. AST
Ajay Jain, director of product marketing, Mobile Products, Rambus

Midrange smartphones and tablets are projected to be the fastest-growing segment in mobile computing. Lower device cost may drive this growth, but consumers are unwilling to compromise on performance, capacity, or battery life. This presentation will describe a memory solution that uses enhanced signaling techniques to reduce memory subsystem power by up to 25% versus standard LPDDR3 while retaining backward compatibility. By implementing enhanced signaling techniques similar to LPDDR4, SoC developers can achieve increased performance at improved power efficiency with the cost advantages of LPDDR3.

About Rambus Inc.

Rambus brings invention to market. Our customizable IP cores, architecture licenses, tools, services, and training improve the competitive advantage of our customers’ products while accelerating their time-to-market. Rambus products and innovations capture, secure and move data. For more information, visit rambus.com.
