The importance of understanding bandwidth
https://www.rambus.com/blogs/mid-the-importance-of-understanding-bandwidth/ | Mon, 21 Sep 2015

Did you know that the terms “latency” and “bandwidth” are frequently misused?

According to Loren Shalinsky, a Strategic Development Director at Rambus, latency refers to how long the CPU needs to wait before the first data is available. Meanwhile, bandwidth describes how fast additional data can be “streamed” after the first data point has arrived.

“Bandwidth becomes a bigger factor in performance when data is stored in ‘chunks’ rather than being randomly distributed,” Shalinsky wrote in a recently published Semiconductor Engineering article. “As an example, programming code tends to be random, as the code needs to respond to the specific input conditions. Large files, where perhaps megabytes or more of sequential data needs to be stored, would represent the other end of the spectrum.”


As Shalinsky points out, modern computer systems adhere to a 4K sector size, with large files broken up into easier-to-manage chunks of 4096 bytes. Interestingly, the concept of a sector size is actually a holdover from the original hard disk drives (HDDs). Indeed, even solid-state drives (SSDs) adhere to this traditional paradigm, thereby maintaining compatibility with computer file systems.
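The chunking described above is simple ceiling division. A minimal sketch (the function name and file sizes are illustrative, not from the article):

```python
# Sketch: how a file maps onto 4 KiB sectors.
SECTOR_SIZE = 4096  # bytes; the 4K sector size discussed above

def sectors_needed(file_size_bytes: int) -> int:
    """Number of 4 KiB sectors a file occupies (the last sector may be partially used)."""
    return -(-file_size_bytes // SECTOR_SIZE)  # ceiling division

print(sectors_needed(10 * 1024 * 1024))  # a 10 MiB file spans 2560 sectors
print(sectors_needed(100))               # even a 100-byte file occupies one full sector
```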

To further illustrate the differences between bandwidth and latency, Shalinsky created a detailed chart that compares expected bandwidth, once latency is accounted for, with the bandwidth specified by manufacturers for common and up-and-coming memory solutions.

“For each of these examples, I assume the first access is to a random storage location and, therefore, the latency must be accounted for,” he explained. “Note that when accounting for latency, the calculated bandwidth often pales in comparison to the bandwidth specified in a product brief.”
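The calculation behind a chart like this can be sketched in a few lines. The latency and peak-bandwidth figures below are assumed, round numbers for a hypothetical SSD-class device, not values from Shalinsky's chart or any product brief:

```python
# Effective bandwidth for a single random access: the initial latency must be
# amortized over the bytes transferred, so small transfers fall far short of peak.

def effective_bandwidth(latency_s: float, max_bw_bytes_per_s: float, transfer_bytes: int) -> float:
    """Bytes/s actually achieved: transfer size divided by (latency + streaming time)."""
    total_time = latency_s + transfer_bytes / max_bw_bytes_per_s
    return transfer_bytes / total_time

# Hypothetical device: 100 us first-access latency, 2 GB/s peak bandwidth.
bw_4k = effective_bandwidth(100e-6, 2e9, 4096)
print(f"4 KiB random read: {bw_4k / 1e6:.1f} MB/s")  # ~40 MB/s, far below the 2 GB/s peak
```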

Understanding application use cases, says Shalinsky, is critical to determining which type of memory is most appropriate. For example, imagine a server running a database application with small, 1-Kbyte records that are rarely accessed sequentially. In that scenario, latency dominates performance.

“[Yes], SSDs [do] provide a significant improvement over hard drives,” he continued. “However, their performance is still three orders of magnitude smaller than any DRAM-based memory systems. [Nevertheless], SSDs have continued to move closer to the CPU, reducing their latency along the way.”

While SSDs adhering to the NVMe protocol aim to lower latencies, this does little to change the NAND devices inside the SSDs, which carry an inherent latency of tens to hundreds of microseconds. Even the greater-than-50% latency reduction touted for NVMe doesn't close the memory gap.
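A rough, order-of-magnitude sketch makes the point; the latency figures below are assumptions chosen from the ranges mentioned above, not measurements:

```python
# Even halving SSD latency leaves NAND orders of magnitude behind DRAM.
dram_latency_ns = 100          # assumed: DRAM access on the order of 100 ns
nand_ssd_latency_ns = 100_000  # assumed: NAND SSD, i.e. 100 us (tens to hundreds of us)

nvme_latency_ns = nand_ssd_latency_ns * 0.5  # the >50% reduction touted for NVMe
print(nvme_latency_ns / dram_latency_ns)     # still ~500x slower than DRAM
```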

“For a database where the record size gets larger, say 8 Kbytes in size, the calculated bandwidth does improve markedly – as the system can now take better advantage of the max bandwidth and spread the ‘cost’ of the latency over more bytes,” Shalinsky confirmed. “By being very strategic in the placement of the data (e.g. for record sizes that are in the megabyte range), all of these systems have the capability of continuously streaming the data, and then bandwidths begin to approach the specified max bandwidth.”
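The amortization effect Shalinsky describes can be seen by sweeping the record size. As before, the device parameters here are assumed round numbers (100 µs latency, 2 GB/s peak), not figures from his chart:

```python
# Effective bandwidth climbs toward the specified maximum as the record size
# grows, because the fixed first-access latency is spread over more bytes.

LATENCY_S = 100e-6  # assumed first-access latency
MAX_BW = 2e9        # assumed peak bandwidth, bytes/s

def effective_bw(record_bytes: int) -> float:
    """Achieved bytes/s for one random access of record_bytes."""
    return record_bytes / (LATENCY_S + record_bytes / MAX_BW)

for size in (1_024, 8_192, 1_048_576, 16_777_216):
    bw = effective_bw(size)
    print(f"{size:>10} B record: {bw / 1e6:8.1f} MB/s ({100 * bw / MAX_BW:5.1f}% of peak)")
```

With these assumed parameters, a 1 Kbyte record achieves well under 1% of peak, an 8 Kbyte record several times more, and megabyte-range records begin to approach the specified maximum, which mirrors the progression described in the quote.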

As we noted above, understanding application use cases is critical to choosing the right type of memory. DRAM-based memory systems, for example, are a good fit when it comes to maximizing performance for random operations.

“If you need memory for large records, consider what your budget allows and how much memory capacity and bandwidth you really need. Then you can make an informed decision,” Shalinsky concluded.

Building power conscious data centers
https://www.rambus.com/blogs/building-power-conscious-data-centers-2/ | Tue, 11 Nov 2014

Writing for EnterpriseTech, George Leopold notes that data center energy consumption will only continue to increase in the near future – even as regulators attempt to rein in carbon emissions at the coal-fired plants tasked with producing much of the electricity used to operate and cool data centers.

To further complicate matters, says Leopold, a recent industry audit determined that investments aimed at improving data center power usage efficiency (PUE) are hitting a wall.

“If business is going well and the information explosion and ‘Internet of Things’ continues, then there will be more data processing tomorrow than there is today,” Bob Landstrom, director of product management for U.K.-based colocation specialist Interxion, told EnterpriseTech.


“Even if every data center in the world is running with a [power usage efficiency] of 1.00, using no energy at all for mechanical cooling, security, or coffee pots – datacenters will demand more energy in the future than is the case today.”

Loren Shalinsky, a Strategic Development Director at Rambus, notes that data centers currently account for 3% of worldwide power consumption, up from an estimated 1.5% just a few years ago. Interestingly enough, memory in data centers, including overhead, accounts for about 20% of total data center power consumption.

“Data centers are becoming more efficient, yet still maintain a PUE of ~1.65. After CPUs, DRAM and HDDs are the next two biggest consumers of the total energy used in the data center, not including data center overhead such as cooling and AC/DC-DC/DC losses,” he told Rambus Press.

“Industry analysts estimate that cooling and electrical losses in the datacenter represent a 65% overhead. There is additional overhead within the server, and therefore the total DRAM related energy corresponds to about 20% of data center requirements.”

The advent of next-gen memory solutions offers a genuine opportunity to improve overall efficiency. Indeed, the memory industry continues to demand more bandwidth and storage capacity – while simultaneously placing a strong focus on reducing power consumption.

“Recently introduced DDR4 memory will soon supplant the DDR3 memory currently in vogue in the data center. This should lower DRAM power consumption by more than 35%, leading to an overall data center power reduction of almost 8%,” Shalinsky added.

“Meanwhile, the use of a new signaling technology, as implemented in a Beyond DDR4 paradigm, could potentially reduce DRAM power consumption by another 30%. This means overall datacenter energy use – including overhead – would drop an additional 5%.”
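The percentage arithmetic behind these two projections can be checked directly. The shares below are the approximate figures cited in this post; the computed results land slightly under the rounded numbers in the quotes ("almost 8%" and ~5%):

```python
# Worked arithmetic for the DRAM power-saving estimates (approximate shares).
dram_share = 0.20     # DRAM's share of total data center power, incl. overhead
ddr4_savings = 0.35   # DDR4 vs. DDR3 power reduction

overall_ddr4 = dram_share * ddr4_savings
print(f"DDR4 overall saving: {overall_ddr4:.0%}")  # ~7%, i.e. 'almost 8%'

# A further ~30% DRAM reduction ('Beyond DDR4') applies to the remaining share.
remaining_share = dram_share * (1 - ddr4_savings)  # ~13% of the original total
overall_beyond = remaining_share * 0.30
print(f"Beyond-DDR4 additional saving: {overall_beyond:.1%}")  # ~4%, roughly the ~5% cited
```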
