Linley Fall Processor Conference 2020

October 20-22 and 27-29, 2020 (All Times Pacific)
Virtual Event


Agenda for Day Five: Wednesday, October 28, 2020

8:30am-10:00am Session 7: AI in Edge Devices (Part II)

As AI services move from the service provider into edge devices, processor designers are increasingly including hardware accelerators for this important function. These processors target lower performance than cloud accelerators but must meet the strict cost and power requirements of systems such as consumer and industrial IoT devices, surveillance and retail cameras, and even mobile devices. This session, moderated by The Linley Group senior analyst Mike Demler, examines a range of chips and IP cores that accelerate AI inference in various edge devices.

A Low-Cost AI-Inference Accelerator PCIe Board Under 20W
Geoff Tate, CEO & Cofounder, Flex Logix

This presentation will introduce a half-height, half-length PCIe board for low-cost, low-power edge-inference servers. Using the InferX X1 chip, the board delivers acceleration comparable to a GPU-based solution at much lower cost and power consumption. Building a complete solution requires a comprehensive software suite. We will describe the compiler, APIs, and run-time environment, along with our 2021 roadmap for two more boards targeting servers and embedded systems.

Using a Machine-Learning SDK to Boost Performance/Watt in Edge-AI Systems
Kavitha Prasad, VP of Systems Solutions, SiMa.ai

This presentation will describe how to use a machine-learning SDK to deliver leading-edge performance on networks such as ResNet-50, VGG, and Inception_v2. We’ll show how results from running these benchmarks on an emulation platform correlate closely with the simulation environment. Finally, this talk will describe SiMa.ai’s MLSoC edge-AI processor, which delivers 50 TOPS at just 5 W, and how it handles a multitude of frameworks across applications such as surveillance and robotics with the highest level of accuracy.

Edge-AI Processor IP Solutions for a Broad Market
Pulin Desai, Director Vision and AI Product Marketing, Cadence

Due to rapid progress in neural network research and varying processing requirements, programmable solutions are essential. Processors deployed in high-volume AI-enabled end products, such as smart speakers, mobile phones, surveillance cameras, and automotive subsystems, meet application needs by distributing processing across low-power, programmable DSPs and AI accelerators. This presentation highlights key trends in DNN topologies and software tools that support a wide range of edge-AI processing systems, from low-cost voice-activated consumer devices to high-throughput autonomous-vehicle perception.

For this session, each talk will have 10 minutes of Q&A immediately following.

10:00am-10:10am Break Sponsored by Flex Logix
10:10am-11:40am Session 8: Heterogeneous Computing

As the industry embraces heterogeneous computing, vendors are delivering a new generation of application-optimized silicon. These new chips deliver both throughput and efficiency, setting them apart from general-purpose processors. Led by The Linley Group principal analyst Bob Wheeler, this session examines three recent examples that illustrate this trend.

A New Microprocessor Class for Data-Centric Computing
Rajan Goyal, CTO, Fungible

Modern applications need high-performance, scale-out "hyperdisaggregated" data centers, in which disaggregated compute and storage resources interact as a unified pool that serves application needs. This presentation will discuss two of the biggest challenges in scale-out data centers: inefficient interchange of data and inefficient execution of data-centric computations. It will also include a deep dive into the solution, the Fungible DPU, a programmable microprocessor that addresses these inefficiencies while strengthening data-center reliability, security, and agility.

Massive Edge Computing Opportunity and the Fourth FPGA Wave
Mike Fitton, Sr. Director of Strategy and Planning, Achronix

The emerging edge opportunity is similar to the last decade’s growth in cloud infrastructure. The factors driving infrastructure transformation at the edge make a strong case for the fourth FPGA wave, in which FPGAs become ubiquitous programmable building blocks. 5G and IoT will drive demand for massive data bandwidth and computing at the source of the data. COVID-19 is accelerating digital transformation and the need for efficient edge computing. This presentation will discuss these technology trends, challenges, and solutions.

A High-Performance Infrastructure Processor to Support the Network Transformation
Wilson Snyder, Distinguished Architect, Marvell

Infrastructure processors, a combination of compute, networking, and programmable accelerators, are required across the entire network. This presentation explores these end-to-end networking requirements and discloses how the architecture of the Octeon CN98xx DPU was designed to meet them. Architecture disclosures include innovations in caching, hardware scheduling, virtualization provisioning, traffic management, inline cryptography, regular expression hardware, and more.

For this session, each talk will have 10 minutes of Q&A immediately following.

11:40am-12:40pm Breakout sessions with today's speakers
1:30pm-3:30pm Speaker 1:1 Meetings

 

Gold Sponsors: Andes Technology, GSI Technology