Linley Spring Processor Conference 2021

April 19 - 23, 2021
Proceedings Available


Agenda for Day Three: Wednesday, April 21, 2021

8:30am-10:00am  Session 4: AI SoC Design

Designing AI/ML SoCs introduces new design constraints. These SoCs find their way into a myriad of applications where safety, voltage range, and memory bandwidth become paramount concerns. This session, moderated by The Linley Group senior analyst Aakash Jani, discusses clock distribution networks, ISO-compliant automotive SoC architectures, and modern memory solutions for AI/ML SoCs, all of which ease pain points for these heterogeneous silicon solutions.

An Intelligent Clock Distribution Network for AI and Heterogeneous SoCs
Mo Faisal, President and CEO, Movellus

Modern SoC designs, especially AI-enabled applications, have complex clocking requirements. SoC-level clock distribution challenges such as on-chip variation, jitter, clock skew, peak current, and switching noise have increased with advanced process scaling. We present a platform solution that intelligently orchestrates timing across an SoC using all-digital, fully synthesizable components, reducing power consumption across a wide voltage range while delivering up to 10X smaller area and low jitter.

Architecting Automotive SoCs with AI/ML and Functional Safety Capabilities
Stefano Lorenzini, Functional Safety Manager, Arteris IP

We describe lessons learned using network-on-chip (NoC) interconnect technology to create ISO 26262-compliant automotive SoC architectures that perform near-real-time AI/ML inference processing. As ADAS and autonomous driving systems become more complex, the number and complexity of the hardware accelerators implementing AI/ML processing within the SoC “brains” of these systems is increasing. Architecting these systems to be ISO 26262-compliant is a technical and operational challenge because low-latency and high-bandwidth requirements can conflict with functional safety goals.

Advancing Memory Solutions for AI/ML Training
Frank Ferro, Senior Director of Product Management, Rambus

Exponential data growth is driving performance requirements in data centers and networking infrastructure. The rapid rise of AI/ML contributes greatly to this enormous growth in data, and the industry has responded with new hardware platforms developed for training applications. But even these new designs are increasingly limited by memory bandwidth, leaving the available processing power underutilized. This presentation will explore how HBM2E memory subsystems can address the growing AI/ML memory bottleneck.
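As a rough, back-of-envelope illustration of why HBM2E targets this bottleneck (the figures below are taken from the published HBM2E specification, not from this talk): a single HBM2E stack with a 1024-bit interface running at 3.2 Gb/s per pin provides

1024 bits × 3.2 Gb/s ÷ 8 bits/byte ≈ 410 GB/s per stack

so an accelerator with four such stacks can reach roughly 1.6 TB/s of aggregate memory bandwidth.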

For this session, each talk will have 10 minutes of Q&A immediately following.

10:00am-10:10am  Break Sponsored by Intel
10:10am-11:40am  Session 5: Network Infrastructure for AI and 5G

From AI clusters to 5G vRAN, the need for lower latency and greater bandwidth is driving new network architectures. This session, led by The Linley Group principal analyst Bob Wheeler, discusses how accelerators, processors, and interconnects are meeting these new infrastructure requirements.

The Industry’s First Structured ASIC (SASIC) for 5G, AI, and the Edge Explosion
Massimo Verita, Senior Director, Intel

Intel’s structured ASIC technology (eASIC) sits between FPGAs and ASICs, offering power, performance, and unit-price advantages over FPGAs, along with shorter time to market and lower NRE costs than ASICs. Intel’s latest eASIC N5X devices use embedded microprocessors, logic and memory arrays, and I/O and transceivers configured using via masks to create the final product. This talk will cover three topics that enable eASIC technology to address a wide range of use cases benefiting from reduced power and increased performance with advanced security.

Silicon Photonics Solutions Address Bandwidth, Reach, and Power Challenges
Anthony Yu, VP Computing and Wired Infrastructure, GlobalFoundries

As traditional chip scaling becomes less practical, the future of silicon photonics (SiPh) shines bright. SiPh's bandwidth, reach, and power advantages have driven wide adoption in pluggable optics and now extend to ML/AI computing systems through co-packaged optics (CPO). CPO offers cost and power savings across the ASIC, packaging, multi-wavelength light sources, assembly, and photonics. This presentation will explore the broader adoption of SiPh through an approach that integrates complex CMOS and optical component functionality onto a single photonic integrated circuit (PIC).

The Fusion of Network and Baseband Processors in the 5G Fronthaul
Yaniv Kopelman, Vice President and Switch CTO, Marvell; and Hongjik Kim, Senior Director, Wireless Modem Engineering, Marvell

The emergence of Open RAN and the disaggregation of the RAN are driving new fronthaul network architectures in 5G infrastructure. In this presentation, we discuss combining specialized network switches with baseband processors to meet fronthaul latency, throughput, and processing requirements. We will evaluate the new hybrid xHaul network model and show how various radio access nodes, together with the fusion of signal processing, packet processing, and switch functionality, create a flexible and scalable 5G network.

For this session, each talk will have 10 minutes of Q&A immediately following.

11:40am-12:40pm  Breakout sessions with today's speakers
