Loren Shalinsky Archives - Rambus

At Rambus, we create cutting-edge semiconductor and IP products, providing industry-leading chips and silicon IP to make data faster and safer.

Amazon’s X1 packs up to 2TB of memory
Mon, 19 Oct 2015 | https://www.rambus.com/blogs/mid-amazons-x1-packs-up-to-2tb-of-memory/

Jeff Barr, the Chief Evangelist at Amazon Web Services, has confirmed that the company’s X1 instances will pack up to 2TB of memory.

“On the high end, many of our enterprise customers are clamoring for instances that have very large amounts of memory,” Barr explained in a recent blog post. “They want to run SAP HANA and other in-memory databases, generate analytics in real time, process giant graphs using Neo4j or Titan, or create enormous caches.”

As such, says Barr, X1 instances will feature up to 2 TB of memory, a full order of magnitude larger than the current generation of high-memory instances. Indeed, X1 instances are designed for demanding enterprise workloads including production installations of SAP HANA, Microsoft SQL Server, Apache Spark and Presto.

The X1 instances will be powered by up to four Intel Xeon E7 processors with high memory bandwidth and large L3 caches – both designed to support high-performance, memory-bound applications.

“With over 100 vCPUs, these instances will be able to handle highly concurrent workloads with ease,” he added.

According to Diane Bryant, senior VP and general manager of Intel’s data center group, the X1 is “the first real use industry-wide of the Xeon E7 processor in an infrastructure-as-a-service (IaaS) offering.”

Meanwhile, Amazon chief technology officer Werner Vogels confirmed the X1 instances pack Intel Haswell processors with more than 100 cores available.

Loren Shalinsky, a Strategic Development Director at Rambus, told us that including up to 2TB of memory per instance will help cover the current requirements of the vast majority of in-memory databases and other memory-intensive applications.

“As Barr notes above, these include SAP HANA and other in-memory databases, the generation of analytics in real time, processing giant graphs using Neo4j or Titan, or the creation of enormous caches,” he concluded.

The 3MB of RAM in William Gibson’s Neuromancer
Wed, 07 Oct 2015 | https://www.rambus.com/blogs/mid-the-3mb-of-ram-in-william-gibsons-neuromancer/

Neuromancer, a 1984 cyberpunk novel by William Gibson, was the first winner of the science fiction triple crown: the Nebula Award, the Philip K. Dick Award and the Hugo Award. Marking the beginning of the Sprawl trilogy, the book tells the story of Case, a washed-up computer hacker hired by an enigmatic employer.

According to Wikipedia, Neuromancer popularized such terms as cyberspace and ICE (Intrusion Countermeasures Electronics), with the groundbreaking novel heavily influencing The Matrix, a cyberpunk film which hit theaters in 1999. Moreover, speculative fiction author Jack Womack believes Gibson’s vision of cyberspace may very well have inspired the way in which the Internet evolved.

Although Neuromancer was clearly prescient in many ways, Johnny Ryan points out that Gibson’s vision of his dystopian future didn’t fully account for Moore’s Law when it came to RAM.

“The novel’s protagonist lives in a far-distant future where technology has advanced almost beyond recognition. Yet he is betrayed for the sake of memory chips totaling 3 megabytes of random-access memory (RAM),” Ryan writes in A History of the Internet and the Digital Future. “The person who stole the RAM chips from his computer is later killed for the same 3MB of RAM. That Gibson considered 3MB a trove worth killing for in the bold future he conceived shows the galloping pace of technology change.”

By 2010, says Ryan, even many lightweight, portable computers were sold with a thousand times the amount of RAM that the characters in Neuromancer had killed and died for. Indeed, as we’ve previously discussed on Rambus Press, the IBM PC 5150, sold from 1981 to 1987, supported 16 kB to 256 kB of RAM. Essentially, this means the memory capabilities of common computers available to the masses have increased by one million times in a period of 24 years, or a ‘doubling’ in capacity about every two years.

Video: https://www.youtube.com/watch?v=w5jxo_WRVGY

“During the 1980s, the infamous ‘640K [of RAM] ought to be enough for anybody’ quote was making the rounds in the computer world. Clearly, it is somewhat difficult, whether for a science fiction author or even an experienced industry analyst, to precisely gauge the evolutionary cadence of a specific technology,” Loren Shalinsky, a Strategic Development Director at Rambus, told us. “We tend to focus on the very real problems that a particular technology is imminently facing – losing sight of the vast number of people looking for innovative ways of continuing the pace of technology evolution.”

It’s been more than 30 years since Neuromancer was written, says Shalinsky, and once again the price of the latest DRAM technology (DDR4) is close to parity with that of its predecessor (DDR3), all while offering lower power consumption and higher performance.

“In the meantime, other memory technologies are lining up to be the successor,” he added. “They could take the more evolutionary path of DDR5, adopt a higher-bandwidth memory approach with HBM or HMC, or consider a technology that incorporates a new bit cell technology like ReRAM. At Rambus, we believe the industry needs to work together on developing next-generation DDR solutions, while adhering to the goal of doubling current speed with minimal changes.”

Moore’s Law: From 16 kB to 16 GB
Tue, 29 Sep 2015 | https://www.rambus.com/blogs/mid-moores-law-from-16-kb-to-16gb/

James Sanders of TechRepublic has confirmed that 16 GB SO-DIMM modules are now becoming generally available from multiple vendors.

“[This] eases RAM constraints in devices that have a limited number of slots for RAM modules,” he explained.

“However, due to hardware limitations, these RAM modules do not work with all systems that are able to utilize lower density modules.”

According to Sanders, processors compatible with 16 GB modules include:

* Intel Skylake (6000-series) or Broadwell (5000-series)
* Intel Atom Avoton and Rangeley
* AMD processors that accept DDR3 RAM (except embedded G-Series)
* Tilera, Freescale, and Cavium processors that support DDR3 RAM

“Certain notebooks are thinner than previous generations because components are soldered onto the main system board rather than given slots for user-replaceable RAM. Among these is the ThinkPad T450s, which has 4 GB of DDR3 RAM onboard but leaves one user-replaceable DDR3 slot,” he continued. “Because of this design choice, users of those laptops have been limited to a maximum of 12 GB RAM. With the availability of 16 GB modules, these systems can be configured from the factory (or modified by the end user) to use 20 GB RAM.”

In addition, says Sanders, servers such as the ARM-powered HP Moonshot m700 are good candidates for expanding RAM availability.

“Although the m700 can use four DDR3 modules, RAM use on a server can become particularly heavy, depending on the application — 64 GB would certainly be a welcome upgrade for many workloads,” he concluded.

Commenting on the above report, Loren Shalinsky, a Strategic Development Director at Rambus, told us that the increased adoption of 16 GB SO-DIMM modules illustrates just how much Moore’s Law has benefited the semiconductor industry over the years. Indeed, the IBM PC 5150, sold from 1981 to 1987, supported 16 kB to 256 kB of RAM, a world away from today’s 16 GB SO-DIMM modules.

Image credit: Engelbert Reineke, Wikipedia

“The memory capabilities of common computers available to the masses have increased by 1 million times in a period of 24 years, or a ‘doubling’ in capacity about every two years,” Shalinsky told Rambus Press. “Moore’s Law, or maybe more accurately, the industry’s desire to make Moore’s Law a reality, has been proven once again. We continue to see the switchover from DDR3 to DDR4, indicating that industry requirements for higher-capacity, higher-performing memory have not yet been satisfied.”
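
As a back-of-the-envelope check on that cadence, the growth factor and implied doubling period can be computed directly from the figures in this post. A minimal sketch, assuming the 5150’s 16 kB minimum configuration in 1981 as the baseline and a 16 GB module in 2015 as the endpoint:

```python
import math

base_kb = 16              # IBM PC 5150 minimum configuration, 1981
today_kb = 16 * 1024**2   # a 16 GB SO-DIMM expressed in kB, 2015

growth = today_kb / base_kb    # total growth factor
doublings = math.log2(growth)  # number of capacity doublings
years = 2015 - 1981

print(f"growth: {growth:,.0f}x")                       # 1,048,576x (~1 million)
print(f"doublings: {doublings:.0f}")                   # 20
print(f"years per doubling: {years / doublings:.1f}")  # ~1.7
```

Measured from the 5150’s 256 kB maximum configuration instead, the factor is 65,536x (16 doublings), or roughly one doubling every two years.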

Minding the memory gap
Thu, 24 Sep 2015 | https://www.rambus.com/blogs/minding-the-memory-gap-2/

Mark LaPedus of Semiconductor Engineering recently reported that memory chips and storage devices are struggling to keep pace with the growing demands of data processing.

“To solve the problem, chipmakers have been working on several next-generation memory types. [However], most technologies have been delayed or fallen short of their promises,” he explained.

“But after numerous delays, a new wave of next-generation, nonvolatile memories is finally here. One technology, 3D NAND, is shipping and gaining steam. And three others—magnetoresistive RAM (MRAM), ReRAM and even carbon nanotube RAM (NRAM)—are suddenly in the mix.”

As LaPedus emphasizes, not all technologies will ultimately be part of the evolving memory/storage hierarchy.

“For example, FeRAM remains stuck in the embedded market. Another technology, phase-change memory (PCM), appears to be fading from the picture,” said LaPedus. “MRAM, NRAM and ReRAM are more promising. But these technologies could meet the same fate as PCM if they fail to hit the market window at a reasonable cost.”

According to Loren Shalinsky, a Strategic Development Director at Rambus, there is no shortage of ideas and research on how to fill the gaps in the memory hierarchy.

“It’s really all about finding the right combination of latency, bandwidth and price to match an application’s requirements and the budget of the user. Unfortunately, many of these new memories have been researched, but have proven difficult to bring to market,” he told Rambus Press. “Nevertheless, technologies like 3D NAND are gaining real market interest, as they are not trying to be a perfect memory, but rather a memory that is good enough to replace an existing technology (planar NAND), with a continued path to cost reduction.”

By approaching the market as a direct replacement technology, says Shalinsky, some of the above-mentioned memory standards are attempting to target the same applications as their respective predecessors.

“With NAND exceeding $30B a year, it’s really not a bad market to ‘replace,’” he added.

Meanwhile, other technologies, such as MRAM, ReRAM, and 3D XPoint, are trying to fill the gap between NAND-based systems and traditional DRAM.

“While the gap exists, it is quite large, meaning that there is room for multiple technologies to fill this gap,” Shalinsky concluded. “But since these technologies appear to be ‘in-between’ existing technologies, it may take some extra effort and time to get the market to accept a new memory usage model. Certainly having the backing of major semiconductor players is important and should help to minimize that acceptance time.”
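
To get a feel for the size of that gap, consider the rough access latencies involved. The figures in this sketch are order-of-magnitude assumptions for illustration, not vendor specifications:

```python
# Order-of-magnitude access latencies (illustrative assumptions, in ns)
latency_ns = {
    "DRAM (DIMM)": 100,     # tens to hundreds of nanoseconds
    "NAND (SSD)": 100_000,  # tens to hundreds of microseconds
}

gap = latency_ns["NAND (SSD)"] / latency_ns["DRAM (DIMM)"]
print(f"DRAM-to-NAND latency gap: ~{gap:,.0f}x")  # ~1,000x
```

A gap of roughly three orders of magnitude leaves room for a technology that is, say, 10x slower than DRAM yet 100x faster than NAND to carve out a tier of its own, which is exactly the positioning MRAM, ReRAM and 3D XPoint are attempting.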

The importance of understanding bandwidth
Mon, 21 Sep 2015 | https://www.rambus.com/blogs/mid-the-importance-of-understanding-bandwidth/

Did you know that the terms “latency” and “bandwidth” are frequently misused?

According to Loren Shalinsky, a Strategic Development Director at Rambus, latency refers to how long the CPU needs to wait before the first data is available. Meanwhile, bandwidth describes how fast additional data can be “streamed” after the first data point has arrived.

“Bandwidth becomes a bigger factor in performance when data is stored in ‘chunks’ rather than being randomly distributed,” Shalinsky wrote in a recently published Semiconductor Engineering article. “As an example, programming code tends to be random, as the code needs to respond to the specific input conditions. Large files, where perhaps megabytes or more of sequential data needs to be stored, would represent the other end of the spectrum.”

As Shalinsky points out, modern computer systems adhere to a 4K sector size, with large files broken up into easier-to-manage chunks of 4,096 bytes. Interestingly, the concept of a sector size is actually a holdover from the original hard disk drives (HDDs). Indeed, even solid-state drives (SSDs) adhere to this traditional paradigm, thereby maintaining compatibility with computer file systems.
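
As a concrete illustration of that convention, here is a minimal sketch of how a file maps onto 4,096-byte sectors (a generic example, not tied to any particular file system):

```python
import math

SECTOR_SIZE = 4096  # bytes; the conventional 4K sector

def sectors_needed(file_size: int) -> int:
    """Number of 4 KB sectors a file of the given size occupies."""
    return math.ceil(file_size / SECTOR_SIZE)

print(sectors_needed(10_000_000))  # a 10 MB file occupies 2442 sectors
```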

To further illustrate the differences between bandwidth and latency, Shalinsky created a detailed chart (see below) that compares the bandwidth to expect in practice with the bandwidth specified by manufacturers for common and up-and-coming memory solutions.

“For each of these examples, I assume the first access is to a random storage location and, therefore, the latency must be accounted for,” he explained. “Note that when accounting for latency, the calculated bandwidth often pales in comparison to the bandwidth specified in a product brief.”
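
The calculation behind that observation can be approximated with a simple model: effective bandwidth is bytes transferred divided by (latency + bytes / peak bandwidth). The sketch below is illustrative only; the latency and peak-bandwidth figures are assumptions, not values from Shalinsky’s chart:

```python
def effective_bandwidth(nbytes: int, latency_s: float, peak_bps: float) -> float:
    """Bandwidth actually achieved once first-access latency is paid."""
    return nbytes / (latency_s + nbytes / peak_bps)

# Illustrative figures (assumptions): an SSD with ~100 us latency and
# ~2 GB/s peak, versus DRAM with ~100 ns latency and ~20 GB/s peak.
ssd = dict(latency_s=100e-6, peak_bps=2e9)
dram = dict(latency_s=100e-9, peak_bps=20e9)

for size in (1_024, 8_192, 1_048_576):  # 1 KB, 8 KB and 1 MB records
    print(f"{size:>9} B   SSD: {effective_bandwidth(size, **ssd) / 1e6:7.1f} MB/s"
          f"   DRAM: {effective_bandwidth(size, **dram) / 1e9:5.2f} GB/s")
```

With these assumed numbers, 1 KB records extract only about 10 MB/s from the SSD’s 2 GB/s peak, 8 KB records improve markedly, and megabyte-sized transfers approach the specified maximum: precisely the pattern described below.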

Understanding application use cases, says Shalinsky, is critical to determining which type of memory is most appropriate. For example, imagine a server running a database application with small, 1 KB records that are rarely accessed sequentially. In that scenario, latency dominates performance.

“[Yes], SSDs [do] provide a significant improvement over hard drives,” he continued. “However, their performance is still three orders of magnitude smaller than any DRAM-based memory systems. [Nevertheless], SSDs have continued to move closer to the CPU, reducing their latency along the way.”

However, while SSDs adhering to NVMe aim to lower latencies, this does little to change the NAND devices inside the SSD, which have an inherent latency of tens to hundreds of microseconds. Even the greater-than-50% latency reduction touted for NVMe is not enough to bridge the memory gap.

“For a database where the record size gets larger, say 8 Kbytes in size, the calculated bandwidth does improve markedly – as the system can now take better advantage of the max bandwidth and spread the ‘cost’ of the latency over more bytes,” Shalinsky confirmed. “By being very strategic in the placement of the data (e.g. for record sizes that are in the megabyte range), all of these systems have the capability of continuously streaming the data, and then bandwidths begin to approach the specified max bandwidth.”

As noted above, understanding application use cases is critical to determining which type of memory is most appropriate. For example, DRAM-based memory systems are a good fit when it comes to maximizing performance for random operations.

“If you need memory for large records, consider what your budget allows and how much memory capacity and bandwidth you really need. Then you can make an informed decision,” Shalinsky concluded.

Memory price dip to spur DDR4 adoption
Wed, 16 Sep 2015 | https://www.rambus.com/blogs/mid-memory-price-dip-to-spur-ddr4-adoption/

KitGuru’s Anton Shilov reports that DDR4 prices have dropped approximately 25% since late June.

“According to DRAMeXchange, the world’s leading computer memory tracker, one 4Gb DDR4 chip rated to run at 2133MHz cost $3.618 on the spot market on the 28th of June 2015,” he explained.

“The average price of such a chip dropped to $3.302 on the 1st of August. At present such a chip costs $2.719, or about 25 per cent less than in late June.”

Meanwhile, the cost of DDR3 memory is also decreasing, with the price differential between a 4Gb DDR3 1600MHz chip ($2.217) and a 4Gb DDR4 2133MHz chip pegged at around 30 per cent.

“The price of DRAM memory chips directly affects the pricing of memory modules,” Shilov confirmed. “As a result, DDR4 DIMMs now cost less than DDR3 modules did a year ago.”

Commenting on the report, Loren Shalinsky, a Strategic Development Director at Rambus, notes that DRAM prices have finally started to decrease after an extended period without major fluctuations.

“Shilov points to DDR4 coming down about 25% since June. We’ve also seen a corresponding, albeit slightly smaller, drop in DDR3 prices of about 19%,” Shalinsky told Rambus Press. “According to DRAMeXchange data, the delta between DDR3 and DDR4 is now only about 20% – although DDR4 offers higher performance at that price. We expect this price delta to continue to shrink, helping to further drive adoption of DDR4.”
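
Working from the DRAMeXchange spot prices quoted above, the percentages are easy to reproduce. A quick sketch (only the prices cited in this post are used):

```python
ddr4_june, ddr4_now = 3.618, 2.719  # 4Gb DDR4-2133 spot price, USD
ddr3_now = 2.217                    # 4Gb DDR3-1600 spot price, USD

ddr4_drop = (ddr4_june - ddr4_now) / ddr4_june
premium = (ddr4_now - ddr3_now) / ddr3_now

print(f"DDR4 drop since late June: {ddr4_drop:.0%}")  # 25%
print(f"DDR4 premium over DDR3:    {premium:.0%}")    # 23%
```

The drop matches Shilov’s 25% exactly; the DDR3/DDR4 delta lands between the two figures cited above (the exact percentage depends on which price is taken as the base).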

As a point of comparison, says Shalinsky, the first DDR3 product (1Gb) was introduced in late 2009. By the end of 2010, iSuppli tracked a price drop of nearly 50%.

“Manufacturing node shrinks are becoming increasingly difficult,” he added. “However, shrinks are still being driven into the marketplace, which helps increase production capacity and gives the manufacturers the ability to charge lower prices.”

Nevertheless, it should be noted that DRAM prices are still higher than they were back in 2012 – a year marked by extremely low margins for memory manufacturers.

Understanding the memory-storage pyramid
Thu, 27 Aug 2015 | https://www.rambus.com/blogs/understanding-the-memory-storage-pyramid-2/

Loren Shalinsky, a Strategic Development Director at Rambus, recently penned a detailed article for Semiconductor Engineering that explores the memory-storage hierarchy.

As he puts it, the hierarchy, or pyramid, is a particularly succinct method of understanding computer systems and the dizzying array of memory options available to the system designer.

“Many different parameters characterize the memory solution,” Shalinsky explained. “Among them are latency (how long the CPU needs to wait before the first data is available) and bandwidth (how fast additional data can be ‘streamed’ after the first data point has arrived), although by my count there are more than 10 different parameters to measure.”

[Figure: the memory-storage pyramid]

As expected, no single memory sub-system can be considered “best” in all categories. As such, various memory solutions are routinely exploited at different levels of the hierarchy to achieve optimized results. For example, high-end systems, such as servers found in datacenters, are most likely to leverage solutions from every level in the hierarchy.
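
As a rough illustration of how a designer might tabulate those trade-offs, here is a minimal sketch. The tiers and all figures are generic assumptions for illustration; they are not taken from Shalinsky’s pyramid:

```python
from dataclasses import dataclass

@dataclass
class MemoryTier:
    name: str
    latency_ns: float      # time until the first data arrives
    bandwidth_gbps: float  # streaming rate after the first access
    usd_per_gb: float      # very approximate cost

# Ballpark, assumed figures for a few levels of the pyramid
pyramid = [
    MemoryTier("On-chip SRAM", 1, 1_000, 100.0),
    MemoryTier("DRAM DIMM", 100, 20, 8.0),
    MemoryTier("SSD (NAND)", 100_000, 2, 0.5),
    MemoryTier("Hard drive", 10_000_000, 0.2, 0.05),
]

for t in pyramid:
    print(f"{t.name:<13} {t.latency_ns:>12,.0f} ns "
          f"{t.bandwidth_gbps:>7.1f} GB/s   ${t.usd_per_gb}/GB")
```

Each step down the pyramid trades roughly an order of magnitude or more of latency for a large drop in cost per gigabyte, which is why high-end systems use every level at once.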

While the relative placement on the pyramid (see above) doesn’t change, memory systems continue to evolve at a steady cadence, and future DIMM subsystem improvements are perhaps the easiest to imagine. DRAM latency has not changed much over the years, but DRAM data rates continue to increase, with an eye on more capacity and bandwidth.

New memory technologies such as HBM or HMC, says Shalinsky, can be sandwiched between DIMMs and on-chip memories, placing gigabytes of data even closer to the CPU than a DIMM can.

“Going back 5-10 years, Solid State Drives (SSDs) started to fill the huge gap that originally existed between DIMMs and hard drives,” he continued. “[However], the underlying NAND technology performance has somewhat leveled off (but made extraordinary progress in price reduction), and has therefore left the door open for additional technologies to fill the remaining gaps.”

To be sure, 3D XPoint technology, announced by Intel and Micron in July, seems to be targeting these very gaps.

“While technical details are scarce, we can piece together enough data points to surmise that 3D XPoint could fill one of the two blank levels currently in between SSDs and DIMMs,” he added. “Even with the addition of 3D XPoint, many gaps will continue to exist in the memory hierarchy, leaving no shortage of research avenues for companies in the memory industry.”

It should be noted that Shekhar Borkar, Intel Fellow and director of extreme-scale technologies, recently told The Platform that DRAM will be regarded as a first-level, high-capacity memory for years to come.

“The bottom line is that for the next ten years, if I am a node designer, I will rely on DRAM as a first-level, high capacity memory, followed by NAND or PCM as the next level for storage,” he said. “Everything else – keep working on it, and when it is ready, I will use it. Today, you are not ready.”

Is DRAM adhering to Moore’s Law?
Tue, 28 Jul 2015 | https://www.rambus.com/blogs/mid-is-dram-adhering-to-moores-law/

Writing for PC Magazine, Michael J. Miller notes that although most of the discussion around Moore’s Law has thus far focused on logic chips, the memory industry has clearly entered a transitional stage.

“DRAM shrinks have slowed dramatically. Most of the makers are now in the transition to 20nm DRAM, with perhaps one or two more generations left to go,” Miller explained.

“Any further advances in density or cost will then have to come from additional manufacturing capacity, larger wafer sizes (450mm), 3D chip stacking (Hybrid Memory Cubes), or perhaps eventually a new type of memory altogether such as MRAM.”

Commenting on the above, Loren Shalinsky, a Strategic Development Director at Rambus, told us that despite a certain amount of industry consternation, Moore’s Law is certainly still alive.

“As Miller himself points out, ‘the reports of the Law’s death have been greatly exaggerated,’” he said. “To be sure, while the transition time from node to node shifts over time, it’s really no different than Moore’s original observation, which shifted from 1 year to 2 years. Plus, specific semiconductor verticals have always progressed at their own individual cadence.”

According to Shalinsky, one large flash vendor once doubled its flash storage capacity year after year for nearly five years in a row.

“This was accomplished via a combination of quick full node transitions and the introduction of MLC – storing 2 bits of data in a single memory cell,” he continued.

While this feat of doubling storage capacity so quickly is unlikely to be repeated, it does indicate that different technologies can go through rapid transition phases.

“3D NAND maintains the tradition of more transistors per die by pushing a lever other than node size – stacking transistors on top of each other. This harks back to the original concept of Moore’s Law: the number of transistors doubling in a given time frame.”

Indeed, as Miller explains, 3D NAND uses multiple layers of memory cells fabricated with very thin, uniform films. Simply put, this means the feature sizes of the individual cells no longer need to be so small, although the density continues to scale – potentially to 1 terabit on a chip – by adding more layers.
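
The arithmetic behind that layer-driven scaling is simple: die capacity is the product of cells per layer, bits per cell and layer count. In this sketch, the cell count and MLC assumption are illustrative figures, not numbers from any announced part:

```python
cells_per_layer = 8e9  # assumed storage cells per layer on one die
bits_per_cell = 2      # MLC: two bits stored per cell (assumption)

for layers in (32, 48, 64):
    capacity_gbit = cells_per_layer * bits_per_cell * layers / 1e9
    print(f"{layers} layers -> {capacity_gbit:,.0f} Gbit per die")
# 32 layers -> 512 Gbit; 64 layers -> 1,024 Gbit (~1 terabit)
```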

“At Semicon West, the equipment companies said the transition to 3D NAND is happening more quickly than expected, and by some estimates, 15 percent of the world’s capacity by bits will have shifted by the end of this year,” he concluded.

Server market growth tied to increased memory demand
Thu, 16 Jul 2015 | https://www.rambus.com/blogs/server-market-growth-tied-to-increased-memory-demand-2/

Loren Shalinsky, a Strategic Development Director at Rambus, recently penned an article for Semiconductor Engineering that explores how server market growth has prompted a marked increase in memory demand.

“A high-end server can have 48 or more DIMM slots, providing nearly 200x the memory capacity of a standard PC. A server not only requires more memory, but also higher-bandwidth memory,” he explained.

“The industry is currently transitioning to DDR4 memory, which will eventually lead to speeds that are 50% higher than the older DDR3 memory, along with improved power efficiency.”
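
To put the “nearly 200x” figure in context, a quick sketch; the DIMM and PC capacities below are assumptions for illustration, not numbers from the article:

```python
server_slots = 48  # DIMM slots in a high-end server (from the article)
dimm_gb = 64       # a high-capacity server LRDIMM (assumed)
pc_gb = 16         # a well-equipped desktop PC (assumed)

server_gb = server_slots * dimm_gb
print(f"server: {server_gb} GB, ratio: {server_gb / pc_gb:.0f}x")
# server: 3072 GB, ratio: 192x -- in line with "nearly 200x"
```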

According to Shalinsky, the explosion of information being generated and processed in Big Data applications – such as real-time analytics, virtualization and in-memory databases – is staggering, and it has only just begun.

“These massive amounts of data will continue to put pressure on data center and enterprise servers for more bandwidth and capacity,” he said.

“The combination of these trends will continue to fuel growth for both servers and memory, painting a future outlook that is anything but bleak.”

As we’ve previously discussed on Rambus Press, DDR4 will continue to ramp in servers before finding its way into desktop PCs, laptops, and consumer applications like digital TVs and set-top boxes.

Concurrently, the cost of DDR4 will steadily decrease, ultimately reaching price parity with DDR3, at which point it will become the de facto choice for consumer products.

As Shalinsky told us in June, DDR3 memory has been the workhorse for main memory since 2009.

“It’s had a good run, but the industry is poised to let its successors take over,” he added.

Report: Intel Skylake Xeons could feature 28 cores, 6 memory channels
Thu, 04 Jun 2015 | https://www.rambus.com/blogs/report-intel-skylake-xeons-could-feature-28-cores-6-memory-channels-2/

ExtremeTech’s Joel Hruska recently analyzed a set of leaked slides that suggest Intel’s plans for its upcoming Xeon cores may “stretch farther into the stratosphere” than originally predicted.

“[The] new data purports to show Intel’s roadmap for 2015 and beyond, stretching all the way to 28 cores and 6 memory channels per CPU,” he explained.

Image Credit: ExtremeTech

“The differentiation shown above can be broadly broken down by year, with the introduction of Broadwell-EP in 2015 with up to 22 cores, a Broadwell-EX in 24-core flavor, and finally, Purley, with a Skylake-based CPU and up to 28 CPU cores.”

As Hruska notes, the memory channels also take a jump in this version, with data rates up to DDR4-2667.

“It makes sense that Intel would actually bump up the per-CPU memory channels — it keeps the ratio of cores to memory channels roughly similar to the current E7 Xeons, which feature up to 18 CPU cores and have four channels per chip,” said Hruska.

“The net effect of these gains should be most significant in the high-end PC and HPC space. AVX-512 is based on previous versions of AVX, but isn’t compatible with the 512-bit extensions currently used on Xeon Phi.”

According to Hruska, AVX-512 compatibility in Skylake would imply that Intel widened its registers to 512 bits for the Xeon flavor of the chip – or fused off this capability in the consumer version of Skylake (if it doesn’t come to market).

Image Credit: ExtremeTech

“In theory, this would push Intel CPUs up to 32 FPU instructions per clock per core — 4x what Nehalem offered just 10 years earlier,” he added. “[Of course], actually taking advantage of all that theoretical firepower is more complicated.”

Commenting on the above-mentioned slides, Loren Shalinsky, a Strategic Development Director at Rambus, said that Skylake’s reported capabilities clearly illustrate the continued progression of Moore’s Law.

“The never-ending demand for increased bandwidth and capacity – within a reasonable power envelope – continues unabated. Memory is an essential part of this paradigm, which is why Skylake is primed for DDR4,” he explained.

“DDR4 memory delivers a 40-50 percent increase in bandwidth, along with a 35 percent reduction in power consumption compared to DDR3 memory (currently in servers).”

To get a clear picture of the rise in memory bandwidth that accompanies the rise in CPU cores, says Shalinsky, we need to look at the number of memory channels as well as the bandwidth provided by each channel.

Image Credit: ExtremeTech

“In 2010, Nehalem featured 8 cores and 3 memory channels running at 1066Mbits/second. With these purported Skylake features, the maximum core to memory bandwidth ratio has been steadily increasing and is now 43% higher than it was with Nehalem and a modest 10%-15% more than the Broadwell-EX. The bandwidth afforded by DDR4 at speeds of up to 2667 is key to ensuring the memory bandwidth can keep up with the CPUs,” he concluded.
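
Shalinsky’s 43% figure can be reproduced from the channel counts and data rates in the slides, assuming each channel is 64 bits (8 bytes) wide and the quoted speeds are transfer rates in MT/s:

```python
def bw_per_core(channels: int, mt_per_s: int, cores: int) -> float:
    """Peak memory bandwidth per core in GB/s, assuming 8-byte channels."""
    return channels * mt_per_s * 8 / cores / 1e3

nehalem = bw_per_core(channels=3, mt_per_s=1066, cores=8)   # 2010
skylake = bw_per_core(channels=6, mt_per_s=2667, cores=28)  # Purley

print(f"Nehalem: {nehalem:.2f} GB/s per core")   # ~3.20
print(f"Skylake: {skylake:.2f} GB/s per core")   # ~4.57
print(f"increase: {skylake / nehalem - 1:.0%}")  # ~43%
```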
