Linley Spring Processor Conference 2021

April 19-23, 2021
Virtual Event


Agenda for Day Two: Tuesday, April 20, 2021
8:30am-9:30am  Session 2: Embedded SoC Design

Companies are deploying deterministic compute solutions across a wide range of applications. Real-time processors were once reserved for lower-performance embedded solutions, but the talks in this session dispel that misconception. This session, moderated by The Linley Group senior analyst Aakash Jani, discusses flexible and scalable embedded processor solutions that deliver strong, reliable, and secure performance within tight power and area constraints.

Enabling Next-Generation Real-Time Workloads
Lidwine Martinot, IoT Solutions Manager, Arm and Neil Werdmuller, Director of Storage Solutions, Automotive and IoT, Arm

Real-time processors deliver fast and deterministic processing while meeting challenging time constraints, and they power a range of applications such as storage controllers and industrial automation. New capabilities such as 64-bit addressing and new instructions targeting ML workloads can drive innovation in existing markets and applications as well as open new ones. This presentation explores how adding new capabilities to a real-time processor can spark innovation across a range of applications.

Meeting Increasing Processor Performance Requirements in High-End Embedded Applications
Paul Stravers, Principal R&D Engineer, Synopsys

Performance requirements in embedded processing are rapidly increasing as high-performance infrastructure spreads from the cloud to end points of the internet. At the same time, power and area constraints in these applications restrict what can be done to increase processor performance, driving the use of configurable multicore solutions and specialized hardware accelerators. This presentation will discuss an embedded processor architecture that can deliver the performance, scalability and flexibility needed to address the ever-increasing performance requirements for a variety of high-end embedded applications.

Each talk in this session will be followed immediately by 10 minutes of Q&A.

9:30am-9:40am  Break sponsored by Arm
9:40am-11:40am  Session 3: Scaling AI Training

As AI models continue to grow in size, training them becomes onerous. Several vendors are developing accelerator chips specifically for this task, focusing on scalability and efficiency. This session, led by The Linley Group principal analyst Linley Gwennap, will discuss how vendors are addressing these challenges to satisfy data-center customers.

AI Training in the Data Center with Habana Gaudi-Based Amazon EC2 Instances
Eitan Medina, Chief Business Officer, Intel

In December 2020, AWS announced that Gaudi-based Amazon EC2 instances would launch in the first half of 2021. In the announcement, AWS stated that “The new EC2 instances will leverage up to 8 Gaudi accelerators and deliver up to 40% better price performance than current GPU-based EC2 instances for training deep learning models.” These EC2 instances address deep-learning training workloads in applications such as image classification, object detection, natural language processing, and recommendation and personalization. This talk will discuss the fundamental hardware and software that enable end users to leverage Gaudi in the public cloud as well as in on-premises deployments.

Scale-Out-First Microarchitecture for Efficient AI Training
Drago Ignjatovic, VP of Silicon Engineering, Tenstorrent and Davor Capalija, Fellow, Tenstorrent

The rapidly growing compute demands and complexity of training AI models necessitate a new computing architecture that can efficiently scale out to tens and even hundreds of thousands of devices. While many vendors have addressed the scale-out challenge primarily by layering system architecture on top of a “fundamentally single-device architecture,” Tenstorrent has designed every aspect of its microarchitecture from the ground up for scale-out to a virtually unlimited number of devices. This presentation will introduce our AI training architecture and its embodiment in a 12nm Wormhole device, and outline the key features of our scale-out-first microarchitecture.

Supporting Scalable Machine Intelligence Systems with the Poplar SDK
Matt Fyles, Senior Vice President Software, Graphcore

Graphcore’s co-designed, tightly integrated software and hardware systems are optimized from the ground up for scalable machine-intelligence performance and developer usability. This talk will outline how Graphcore’s Poplar SDK enables current and next-generation AI models to execute easily and efficiently on second-generation IPU systems.

Wafer Scale at 7nm: The Cerebras CS-1
Sean Lie, Chief Hardware Architect and Cofounder, Cerebras Systems

Cerebras Systems has developed the fastest AI compute system and delivered it to customers worldwide. The Cerebras CS-1 contains the Wafer Scale Engine (WSE), the largest chip ever made: 56 times larger than the competition, with 1.2 trillion transistors, 400,000 AI-optimized cores, and 18 gigabytes of high-speed on-chip memory. The WSE-2 more than doubles this performance, with 2.6 trillion transistors and 850,000 AI-optimized cores. In this talk, we’ll discuss customer use cases that are impossible for legacy technologies and announce the WSE-2 and CS-2.

Each talk in this session will be followed immediately by 10 minutes of Q&A.

11:40am-12:40pm  Breakout sessions with today's speakers
