General Archives - Rambus
At Rambus, we create cutting-edge semiconductor and IP products, providing industry-leading chips and silicon IP to make data faster and safer.
Tue, 16 Sep 2025 18:48:00 +0000

[Infographic]: The Powerful Technologies that Enable Systems like ChatGPT to Thrive
https://www.rambus.com/blogs/infographic-the-powerful-technologies-that-enable-systems-like-chatgpt-to-thrive/
Tue, 12 Mar 2024 20:59:04 +0000

Generative AI has been making waves in the tech industry. The capability to understand context and perform tasks like creating and summarizing content with astonishing accuracy in seconds showcases the cutting-edge potential that generative AI has to transform business processes.

Have you ever thought about the technologies that enable generative AI, including ChatGPT and Google Bard? Semiconductor technologies like DDR5, High Bandwidth Memory (HBM), GDDR, and PCI Express are critical in the training and deployment of generative AI.

Security will be another essential requirement as Generative AI proliferates to the edge and increasingly to client systems and smart end points. Safeguarding AI data and assets will require security anchored in hardware.

Check out the Rambus infographic below, “The Powerful Technologies that Enable Systems like ChatGPT to Thrive” to learn more.

Read this infographic to learn about the powerful technologies that enable systems like ChatGPT to thrive

Brad Burke of Rambus Named to the Top 10 Semiconductor Engineering Directors in 2023 by Semiconductor Review
https://www.rambus.com/blogs/brad-burke-of-rambus-named-to-the-top-10-semiconductor-engineering-directors-in-2023-by-semiconductor-review/
Mon, 08 Jan 2024 21:34:58 +0000

We are proud to announce that our very own Brad Burke has been named to the Top 10 Semiconductor Engineering Directors in 2023 by the leading semiconductor publication, Semiconductor Review. The magazine is “recognizing the top 10 semiconductor engineering directors in 2023 who have made technology and process investment in a seamless manner.” We are excited to have one of our engineering leaders included alongside individuals from other esteemed semiconductor companies for this industry recognition.

Brad is a senior director of engineering here at Rambus, responsible for the development of system-on-chip (SoC) products, enabling new memory tiers to further expand server memory bandwidth and capacity for next-generation CXL workloads in the data center.

Read on to hear some thoughts from Brad on semiconductor trends, “The Need to Move to an All Cloud-Based Approach”. He also discusses advice he’s given to young professionals interested in pursuing a similar career, key challenges and what it takes to be successful.

Additional Resources:
Award
Article

Rambus Joins Arm Total Design
https://www.rambus.com/blogs/rambus-joins-arm-total-design/
Thu, 19 Oct 2023 16:09:28 +0000

Generative AI and other advanced workloads bring even greater urgency to accelerate the power of computing. Training models are scaling by an incredible 10X per year, with the largest now reaching trillions of parameters, and are showing no sign of slowing. At the same time, AI inference is pushing out from the data center to millions and ultimately billions of increasingly powerful AI-enabled edge and end point devices. The pressure grows to design and deliver the enabling SoCs for this new reality with greater efficiency and faster time to market.

Arm Total Design brings together ecosystem partners from across the industry committed to frictionless delivery of custom SoCs based on the Arm Neoverse Compute Subsystems (CSS). Neoverse CSS provides Arm technology in a new way, delivering pre-integrated and pre-verified solutions which lower the cost of development and speed time to market. Members of the Arm Total Design ecosystem – IP suppliers, ASIC design houses, EDA tool providers, foundries, and firmware developers – are working together to accelerate and simplify the development of Neoverse CSS-based systems.

“Arm Total Design will empower the entire industry to innovate around Neoverse CSS and build custom silicon optimized for specific use cases including AI, cloud, networking and the edge,” said Mohamed Awad, senior vice president and general manager, Infrastructure Line of Business at Arm. “Rambus brings unique expertise with its industry-leading interface and security IP and will enable collaboration across the broader ecosystem to accelerate the implementation of more specialized solutions.”

As a leading provider of Silicon IP, Rambus is proud to be part of the Arm Total Design ecosystem. “As the complexity of high-performance silicon for data center compute and infrastructure networks continues to rise to meet the needs of advanced workloads, it’s critical that the ecosystem works together to ease the implementation of these advanced chips,” said Neeraj Paliwal, GM of Silicon IP at Rambus. “We’re proud to be part of Arm Total Design to help accelerate our customers’ timeline to production silicon by supplying them with best-in-class high performance interface controller and security IP designs.”

The Rambus Silicon IP portfolio provides the performance and security needed for advanced computing workloads whether in the heart of the data center, at the edge or in endpoint devices. Our interface IP portfolio includes a full suite of high-performance memory controller IP for HBM, GDDR, LPDDR and DDR, as well as digital controllers for PCIe, CXL and MIPI high-speed interconnects. The Rambus security IP offering is the industry’s broadest with security IP solutions that protect hardware and data across the entire semiconductor life cycle.

 

Additional Resources
Arm blog: Harnessing the power of the ecosystem in the era of custom silicon on Arm

Memory Key to Enabling AI: A Recap of AI Hardware Summit
https://www.rambus.com/blogs/focus-on-memory-at-ai-hardware-summit/
Mon, 18 Sep 2023 21:08:57 +0000

Last week, I had the pleasure of hosting a panel at the AI Hardware & Edge AI Summit on the topic of “Memory Challenges for Next-Generation AI/ML Computing.” I was joined by David Kanter of MLCommons, Brett Dodds of Microsoft, and Nuwan Jayasena of AMD, three accomplished experts who brought differing views on the importance of memory for AI/ML. Our discussion focused on some of the challenges and opportunities for DRAMs and memory systems. As the performance requirements for AI/ML continue growing rapidly, the importance of memory continues to grow as well.

In fact, we’re seeing demands for “all of the above” when it comes to memory for AI, specifically:

  • More capacity – model sizes are huge and growing rapidly. David cited embedding tables used by Baidu in their recommender system requiring 10 TB. Assets of that magnitude require a growing amount of DDR main memory capacity.
  • More bandwidth – with the enormous amount of data to be moved, we’re witnessing the continued race to higher data rates across all DRAM types to provide more memory bandwidth.
  • Lower latency – another aspect of this need for speed is lower latency so processor cores aren’t left idle waiting for data.
  • Lower power – unfortunately, we’re running up against the limits of physics, and power has become an important limiter in AI systems. The demand for higher data rates is driving up power consumption. To mitigate this, IO voltages are being reduced, but this lowers voltage margins and increases the chance of errors, which brings us to…
  • Higher reliability – to address increasing error rates at higher speeds, lower voltages, and smaller process geometries, we’re seeing increasing use of on-die ECC and advanced signaling techniques to compensate.
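The capacity pressure described above is easy to see with a back-of-envelope calculation. This sketch is not from the post; the 175-billion-parameter example and byte widths are illustrative assumptions:

```python
def model_footprint_gb(num_params, bytes_per_param=2):
    """GB needed just to hold model weights (2 bytes/param for FP16/BF16,
    4 for FP32)."""
    return num_params * bytes_per_param / 1e9

# A hypothetical 175-billion-parameter model in FP16 needs 350 GB for
# weights alone -- before activations, optimizer state, or embedding tables.
print(f"{model_footprint_gb(175e9):.0f} GB")  # prints "350 GB"
```

Even this simple estimate shows why a single accelerator’s attached memory is quickly outgrown, pushing capacity into DDR main memory and new tiers.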

Another big topic we discussed was the challenges and opportunities for new memory technologies in AI. New technologies have many potential benefits, including:

  • Optimizing capacity, bandwidth, latency, and power for a focused set of use cases. AI is a large and important market with a lot of money behind it, a great combination that can drive the development of new memory technologies. In the past, GDDR (developed for the graphics market), LPDDR (developed for the mobile market), and HBM (developed for high-bandwidth applications including AI) have been created to meet the needs of use cases that could not be satisfied with existing memories.
  • CXL™ – CXL offers the opportunity to greatly scale up memory capacity and improve bandwidth, while also abstracting the memory type from the processor. In this way, CXL provides a great interface for incorporating new memory technologies. The CXL memory controller provides the translation layer between the processor and memory, allowing a new memory tier to be inserted after locally attached memory.
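The tiering idea can be made concrete with a toy sketch. All names and capacities below are invented for illustration; real CXL-attached memory is managed by the platform and OS, not by application code like this:

```python
class TieredMemory:
    """Toy model of a two-tier memory system: fast locally attached DRAM
    backed by a larger CXL-attached tier, as in the memory-expansion
    use case discussed above."""

    def __init__(self, local_gb, cxl_gb):
        self.free = {"local": local_gb, "cxl": cxl_gb}

    def allocate(self, size_gb):
        # Prefer the lower-latency local tier; spill to the CXL tier
        # once local capacity runs out.
        for tier in ("local", "cxl"):
            if self.free[tier] >= size_gb:
                self.free[tier] -= size_gb
                return tier
        raise MemoryError("both tiers exhausted")

mem = TieredMemory(local_gb=256, cxl_gb=1024)
print(mem.allocate(200))  # prints "local"
print(mem.allocate(100))  # prints "cxl" (only 56 GB of local left)
```

The point of the sketch is the ordering: software sees one address space, while the controller places data in the fastest tier with room, exactly the kind of abstraction the CXL memory controller provides in hardware.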

While new memory types targeting specific use cases can be beneficial for many applications, they face additional challenges:

  • DRAM, on-chip SRAM, and Flash memory are here to stay for the foreseeable future, so don’t expect anything to completely replace any of these technologies. Yearly R&D and Capex investment in these technologies, together with decades of experience in high-yield manufacturing, make it essentially impossible to replace any of these technologies in the near-term. Any new memory technology must work well together with these memories in order to be adopted.
  • The scale of AI deployments and risk associated with developing new memory technologies make it difficult to adopt brand new memories. The timeline for memory development is typically 2-3 years, but AI is advancing so fast it can be difficult to predict specific features that may be needed that far into the future. The stakes are high, and so is the risk of relying on a new technology being enabled and available.
  • The performance benefits of any new technology must be high enough to offset any additional cost and risk. Given the demands on infrastructure engineering and deployment teams, this translates to a very high hurdle that new memory technologies need to overcome.

Memory will continue to be a key enabler for future AI systems. Our industry must continue to innovate for future systems to deliver faster and more meaningful AI, and the industry is responding.

Rambus Moderates Panel on “Memory Challenges for Next-Generation AI/ML Computing” at AI Hardware Summit
https://www.rambus.com/blogs/rambus-moderates-panel-on-memory-challenges-for-next-generation-ai-ml-computing-at-ai-hardware-summit/
Mon, 11 Sep 2023 16:59:15 +0000

Dr. Steven Woo, distinguished inventor and fellow at Rambus, will be moderating an upcoming panel at the AI Hardware Summit on Tuesday, September 12th, 2023, starting at 3:00pm PT at the Santa Clara Marriott.

Memory continues to be a critical bottleneck for AI/ML systems, and keeping the processing pipeline in balance requires continued advances in high performance memories like HBM and GDDR, as well as mainstream memories like DDR. Emerging memories and new technologies like CXL offer additional possibilities for improving the memory hierarchy. In this panel, we’ll discuss important enabling technologies and key challenges the industry needs to address for memory systems going forward. Hear fellow ecosystem leaders from Microsoft and AMD discuss these critical topics.

We look forward to seeing you at AI Hardware Summit! And if you can’t make it, but are interested in reading about what happens, stay tuned for our recap blog after the event!

Rambus AES-32 Cryptographic Accelerator IP Core Is Common Criteria Certified
https://www.rambus.com/blogs/rambus-aes-32-cryptographic-accelerator-ip-core-is-common-criteria-certified/
Mon, 10 Oct 2022 20:23:28 +0000

Following on from the recent news that Rambus HQ has been Common Criteria (CC) certified, we are pleased to announce that our AES-32 cryptographic accelerator IP core has also been CC certified. This is the first cryptographic soft IP core to be evaluated under the CC scheme and signals a growing demand for IP solutions that meet the highest security standards.

Common Criteria, also known as ISO/IEC 15408, is an international standard against which security products are evaluated. Common Criteria operates using Evaluation Assurance Levels (EALs) ranging from EAL1 to EAL7, with EAL4 to EAL7 being the highest levels of certification.

The Rambus AES-ECB-32-DPA-FIA soft IP core has been certified by TÜV Rheinland under the Netherlands Scheme for Certification in the Area of IT Security (NSCIB). The IP has been certified as meeting the EAL4+ assurance level for Vulnerability Assessment at AVA_VAN.5 level, Life Cycle Support for ALC_DVS.2 level, and Tests for ATE_DPT.2 level.

As CC evaluations conducted in one country are mutually recognized in more than 30 countries under the Common Criteria Recognition Agreement (CCRA), and by 17 countries under the SOGIS Mutual Recognition Agreement (MRA) for certifications at EAL4 and above within Europe, CC certification of this IP core can support the needs of our customers across the globe.

Stuart Kincaid, Director Systems Architecture and Certifications at Rambus, receiving the official Common Criteria certificate at the International Common Criteria Conference

The objective behind evaluating and certifying our AES IP core was to enable re-use in a security Integrated Circuit (IC) targeting EAL4+ certification. In this case, the evaluation of the final product can make use of the Evaluation Test Report for Composition (ETRFC) and guidance, saving vital time during product development.

This is the first cryptographic accelerator soft IP core to undergo CC evaluation in this manner, but, as the complexity of security products increases and developers turn to experts in security IP solutions to provide critical functions for their products, it is expected that it will be the first of many to come.

Expectations that IP meet the highest security standards are growing rapidly, and as companies face product development costs and time pressure, choosing an IP that has already been evaluated can reduce overall project risk.

Rambus believes soft IP certification can bring tremendous benefits to its customers by speeding up subsequent evaluations of high assurance products, thereby greatly reducing time-to-market and security evaluation costs.

Find out more about Rambus certified security IP solutions.

AI Hardware Summit Event Recap: Interview with Steven Woo
https://www.rambus.com/blogs/ai-hardware-summit-event-recap-interview-with-steven-woo/
Tue, 27 Sep 2022 20:11:03 +0000

The fifth annual AI Hardware Summit was back this month, and for the first time in a couple of years, it took place fully in-person in Santa Clara, California. The world’s leading experts in AI hardware came together over the course of three days to discuss some of the big challenges facing the industry, and amongst them was Rambus Fellow, Steven Woo.

We caught up with Steven to find out all about the event and learn more about the panel discussion he led on one of the primary challenges for AI hardware and systems, the AI memory bottleneck.

Question: How would you describe the AI Hardware Summit 2022 to someone who was not there?

Steven: AI Hardware Summit is focused on AI and Machine Learning at the systems level, and brings together chip, system architecture, and software experts to discuss the biggest challenges in AI hardware. The conference includes talks, panels, workshops, exhibits, and networking sessions that give speakers and participants a chance to interact and share their thoughts on challenges and solutions for developing better AI hardware and systems in the future. I led a panel this year that discussed one of the primary challenges for AI hardware and systems, the AI memory bottleneck.

Question: What were some of your key takeaways from the AI Hardware Summit this year?

Steven: AI hardware and software have really grown in popularity over the last 10 years, and over that time we’ve seen a growing range of use cases across the industry. One of the bigger challenges is how to develop hardware and software that addresses this wide range of workloads. Hardware is expensive and time-consuming to develop, and while targeting hardware to a specific workload will give the best results, it’s just not practical to have many individual hardware designs. Reducing design costs and improving hardware flexibility (that allows many workloads to be addressed by the same hardware) is growing in importance, and software is playing an ever-increasing role in helping to make hardware and systems more flexible. Another important challenge is providing better memory and memory systems for AI hardware. Memory and memory systems are a bottleneck in AI hardware, often limiting the speed at which models can be trained and processed. AI models are growing at a rate that’s faster than traditional technologies can keep up – the largest models now have trillions of parameters, requiring larger memory capacities to store them, and higher memory bandwidths to move models, intermediate results, and training data between memory and AI processors.
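Steven’s point about bandwidth can be made tangible with a rough estimate. The parameter count and bandwidth figure below are illustrative assumptions, not measurements from any particular system:

```python
def stream_time_s(num_params, bytes_per_param, bandwidth_gb_s):
    """Seconds to read every model weight once at a given memory bandwidth."""
    return num_params * bytes_per_param / (bandwidth_gb_s * 1e9)

# Streaming 1 trillion FP16 parameters once through a hypothetical
# 3,350 GB/s HBM-class memory system takes roughly 0.6 seconds -- and
# training touches the weights many times per step, plus intermediate
# results and training data.
print(f"{stream_time_s(1e12, 2, 3350):.3f} s")  # prints "0.597 s"
```

Scaled across thousands of training steps per hour, even this idealized streaming time shows why memory bandwidth, not just compute, gates how fast large models can be trained.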

Question: What AI developments are you personally most excited about seeing in the years to come?

Steven: AI hardware is challenging to use by itself, but software has really helped to democratize access to AI processing. The industry has done a good job with tools, libraries, and infrastructure that abstract away some of the unique details of each hardware implementation, allowing users to focus on algorithms that are automatically translated to efficient code on the hardware. There’s more work to do in this area, but as the industry matures, broader access will become available, opening up AI to an even larger base of users in the future. And while many of the techniques behind current AI hardware have been around for decades, at the time they weren’t practical to implement. Technology advances like better silicon manufacturing and higher performance memory have made them practical now, and this has led to tremendous advances in new algorithms and domain-specific architectures. Transformers are a great example for natural language processing: they weren’t possible years ago, and have only come to fruition because of advanced hardware and larger training sets that enable better algorithm development.

Question: What are the key things that were discussed in your panel to get around the AI memory bottleneck?

Steven: I was joined on the panel by Sumti Jairath (Chief Architect at SambaNova Systems), Matt Fyles (SVP Software at Graphcore), and Euicheol Lim (Fellow at SK hynix), and we talked about the importance of memory from several different angles. Memory and memory systems are a key bottleneck in AI hardware and systems today and will continue to be a bottleneck in the future. Flexible AI hardware and systems need a flexible memory solution that can enable different memory capacity and bandwidths so that resources can be dynamically tailored to meet the needs of workloads being processed. CXL offers a great solution for enabling flexibility that allows memory bandwidth and capacity to be scaled as needed by the infrastructure and AI workloads and offers further benefits by enabling memory disaggregation. In terms of the memory components themselves, roughly 2/3 of the power to access memory and move data back and forth to an AI processor is spent simply moving the data, with the rest of the power being used to access data in the DRAM core. Because minimizing data movement has important benefits for system power and performance, Processing-in-Memory (PIM) is seeing increasing interest in the industry, not only for AI but in other areas as well. PIM offloads some of the most important and most common processing functions directly into the memory device, minimizing data movement and reducing power while increasing performance. Power will remain an important challenge going forward, and any power that can be saved in external components like memory can in turn be used to make processing better. Turning power savings into both a hardware problem and a software problem will help improve power-efficiency in the future. 
Techniques like reduced precision, sparsity, and compression – all of which trade off accuracy for performance and power-efficiency – have been in use for long enough now that software developers understand these tradeoffs and can make appropriate choices to improve power consumption. Although the current era of AI has been going on for about a decade, in many ways we’re still in the early days of this next phase, and we look forward to future developments in this field.
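The reduced-precision tradeoff Steven describes can be sketched in a few lines. This is a minimal symmetric int8 quantizer, purely illustrative and not any particular production scheme:

```python
import random

random.seed(0)
weights = [random.gauss(0, 1) for _ in range(10_000)]

# Symmetric int8 quantization: each FP32 weight becomes one byte,
# so 4x less data moves between memory and the processor.
scale = max(abs(w) for w in weights) / 127
quantized = [round(w / scale) for w in weights]   # ints in [-127, 127]
restored = [q * scale for q in quantized]

# The price is a bounded accuracy loss: at most half a quantization step.
max_err = max(abs(w - r) for w, r in zip(weights, restored))
assert max_err <= scale / 2 + 1e-12
print(f"worst-case error {max_err:.4f} for 4x less data moved")
```

This is exactly the accuracy-for-bandwidth-and-power trade the panel discussed: the error bound is known in advance, so software developers can decide where it is acceptable.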

Rambus HQ Is Common Criteria Certified
https://www.rambus.com/blogs/rambus-hq-is-common-criteria-certified/
Thu, 15 Sep 2022 20:30:03 +0000

As a leading provider of security IP, Rambus invests time and effort in certification, and we are pleased to announce that Rambus headquarters in San Jose, California has been Common Criteria certified by TÜV Rheinland. Read on to find out more about Common Criteria and the benefits this certification brings to Rambus security IP customers!

What is Common Criteria certification?

The Common Criteria for Information Technology Security Evaluation, known as Common Criteria or CC, is an international standard (ISO/IEC 15408) for computer security. Common Criteria provides an objective evaluation that validates whether a product or site satisfies a defined set of security requirements.

Common Criteria operates using Evaluation Assurance Levels or EAL ranging from EAL1 to EAL7. EAL4 to EAL7 are the highest levels of certification. However, it is important to keep in mind that a higher level of CC evaluation does not mean a higher level of security, only that a product or site went through more tests. The responsibility for final product certification remains with the manufacturer of a product.

Why is certification important?

As more and more of our daily activities take place online using devices that collect and exchange our most valuable personal data, we rely on products meeting high security standards.

Certification is a key part of security as it provides evidence that a product meets or has achieved compliance with specific standards developed, reviewed, and maintained by independent organizations.

Stuart Kincaid, Director Systems Architecture and Certifications at Rambus, recently gave a presentation at the Rambus Design Summit which highlighted the importance of certifying security solutions to meet the increasing demand for trust and the many benefits that certification brings to the entire ecosystem.

What were the results of the Rambus HQ Common Criteria evaluation?

Rambus headquarters in San Jose, California has successfully completed the Common Criteria Security evaluation as certified by TÜV Rheinland. The evaluation provides evidence that the site meets the EAL4+ assurance level for Life Cycle Support (ALC_CMC.4, ALC_CMS.4, ALC_DVS.2 at AVA_VAN.5 level, ALC_LCD.1, ALC_DEL.1, and ALC_TAT.1).

The Site Technical Audit Report (STAR) contains the information an evaluation lab and certification body need to reuse the site audit report in a Target of Evaluation (TOE) evaluation, and demonstrates that Rambus develops, tests, and produces its hardware IP for use in secure IC hardware products.

How does this certification benefit Rambus security IP customers?

There are many customer benefits to the Rambus HQ CC certification. The fact that Rambus HQ is CC certified means that our customers do not need to separately audit the Rambus development facility. This saves valuable time during the product development process and greatly simplifies the certification process for their end products.

Find out more about Rambus certified security IP solutions.

Rambus Design Summit Interview Series: Steven Woo
https://www.rambus.com/blogs/rambus-design-summit-interview-series-steven-woo/
Mon, 18 Jul 2022 17:54:06 +0000

Rambus Fellow, Steven Woo, returns to the Rambus Design Summit stage tomorrow, and we are so excited for his keynote: Advancing Computing in the Accelerator Age! In our last interview before the show, we met with Steven to chat about his background, CXL, and some of the biggest challenges for computing in the years ahead.

Read on for Steven’s full interview and don’t forget to register for Rambus Design Summit, happening tomorrow!

Register for Rambus Design Summit!

Question: Can you tell us a bit about your background?
Steven: My background is in computer architecture, and I’ve done research work in multiprocessor architectures, parallel programming, and neural networks. I’ve always been interested in improving the performance of computer systems, and memory systems are critical to faster computing. I’ve led and worked on several projects here at Rambus pushing DRAM and memory performance in PCs and servers, domain-specific architectures for applications like machine learning, and advanced architectures for near-data processing.

Question: What are you working on at Rambus these days?
Steven: I’m currently working in Rambus Labs, the research organization within Rambus, where I lead a team of senior architects chartered with developing innovations for future DRAMs and memory systems. We get to work on longer-term research projects as well as with our business units on nearer-term programs. There are a lot of interesting challenges for future memory systems, and we’re working on solutions that apply to data centers, mobile computing, and high-performance systems.

Question: CXL is such an exciting emerging technology – how do you see that impacting the future of data center architecture?
Steven: CXL is one of the most disruptive technologies that’s happened over the last 20 years. It will support emerging datacenter usage models by providing a cache-coherent interconnect for processors and accelerators, as well as memory expansion for applications that process large amounts of data. CXL will ultimately enable higher performance and improved resource sharing, reducing overall cost of ownership.

Question: What do you think are the biggest challenges for computing in the years ahead?
Steven: As the world’s digital data continues to increase, new innovations are needed so that processing can keep up.  With performance increasingly limited by data movement, the industry must focus on faster and more power-efficient interconnects and memory systems. Applications and usage models are changing, so system architectures must continue to evolve as well. Accelerators offer new ways to process data more quickly, and resource disaggregation enables higher resource utilization and improved cost of ownership that will influence the direction of computing architectures in the coming years.

Rambus Design Summit Interview Series: Ann Keffer
https://www.rambus.com/blogs/rambus-design-summit-interview-series-ann-keffer/
Mon, 11 Jul 2022 19:58:20 +0000

We’re so excited that Ann Keffer, Product Marketing Manager at Siemens EDA, will be joining us on the (virtual) stage at Rambus Design Summit!

Ahead of the show, we talked to Ann about autonomous driving, what she loves to do in her free time, and growth drivers for the Siemens EDA business. Read on for the full interview below, and join us next week to see Ann’s presentation as well as other sessions covering chip and IP solutions for the data center, edge, automotive and IoT devices including the acceleration and security of AI/ML applications!

Register for Rambus Design Summit!

Question: Tell us a bit about yourself and your career path. 

Ann: I started my career at Hewlett Packard, working in R&D as a software developer for their proprietary operating system after graduating with a BS in computer science and math. I moved into management and held management positions in several business units. In 2012 I was recruited to Galil Motion Control, which manufactures motion controllers and PLCs, as head of Product Management and Marketing reporting to the President. In 2014 I joined the robot manufacturer Adept Technology, now part of Omron, as Director of Product Management and Marketing reporting to the CSO, where we launched Adept’s first autonomous robot. In 2016 I was hired by Cadence Design Systems as a Marketing Director in their IP group, and after a year moved to the verification group as Director of Product Management for functional safety. In 2019 I joined Siemens to work in product management for a newly acquired company called Austemper, which developed tools for functional safety verification, and that is where I am now!

For fun, I like to run and cycle, and my hobbies include sculpting and drawing!

Question: What do you see as the big growth drivers for the Siemens EDA business? 

Ann: The automotive market is a big one for sure, but AI/ML, cloud, and 5G are other big growth areas.

Question: What’s the biggest challenge your customers face? 

Ann: The complexities of achieving safety and security on large designs targeted for AVs/EVs! It’s a challenging task, and it’s a new market that I predict will see changes that may increase the complexity of achieving safety. Time-to-certification is definitely a challenge, as making chips/SoCs safe and secure can add many months to the development cycle of a chip.

Question: As we move more towards autonomous driving, what impact are you seeing in the design of automotive electronics? 

Ann: The challenge of getting safety and security certified to the specification of all of the standards we talked about in our presentation!

Question: We’re excited to have you at RDS this year! What key takeaways do you want your audience to walk away with? 

Ann: Thank you so much for inviting me, I am honored to be at RDS!! Key takeaway is this:

Addressing the intersection of safety and security is a challenge, and together, Rambus and Siemens made certain the RT-640 achieved the necessary security and safety levels to allow any automotive SoC design using this IP to fulfill its required use cases.
