Linley Fall Processor Conference 2020
Held October 20-22 and 27-29, 2020
Proceedings available
Agenda for Day Five: Wednesday October 28, 2020
8:30am-10:00am | Session 7: AI in Edge Devices (Part II)
As AI services move from the service provider into edge devices, processor designers are increasingly including hardware accelerators for this important function. These processors target lower performance than cloud accelerators but must meet the strict cost and power requirements of systems such as consumer and industrial IoT devices, surveillance and retail cameras, and even mobile devices. This session, moderated by The Linley Group senior analyst Mike Demler, examines a range of chips and IP cores that accelerate AI inference in various edge devices.
This presentation will introduce a half-height, half-length PCIe board for low-cost, low-power edge-inference servers. Using the InferX X1 chip, the board delivers acceleration comparable to a GPU-based solution at much lower cost and power consumption. Building a complete solution requires a comprehensive software suite; we will describe the compiler, APIs, and run-time environment, along with our 2021 roadmap for two more boards targeting servers and embedded systems.
This presentation will describe how to use a machine-learning SDK to deliver leading-edge performance on networks such as ResNet-50, VGG, and Inception_v2. We’ll show how running these benchmarks on an emulator platform can achieve high correlation with the simulation environment. Finally, this talk will describe SiMa.ai’s MLSoC edge-AI processor, which delivers 50 TOPS at just 5 Watts, and how it can handle a multitude of frameworks across applications like surveillance and robotics with the highest level of accuracy.
Due to rapid progress in neural network research and varying processing requirements, programmable solutions are essential. Processors deployed in high-volume AI-enabled end products, such as smart speakers, mobile phones, surveillance cameras, and automotive subsystems, meet the application needs by distributing processing across low-power, programmable DSP and AI accelerators. This presentation highlights key trends in DNN topologies and software tools that support a wide range of edge-AI processing systems, from low-cost voice-activated consumer devices to high-throughput autonomous-vehicle perception.
For this session, each talk will have 10 minutes of Q&A immediately following.
10:00am-10:10am | Break Sponsored by Flex Logix
10:10am-11:40am | Session 8: Heterogeneous Computing
As the industry embraces heterogeneous computing, vendors are delivering a new generation of application-optimized silicon. These new chips drive both throughput and efficiency, separating them from general-purpose processors. Led by The Linley Group principal analyst Bob Wheeler, this session examines three recent examples that illustrate this trend.
Modern applications need high-performance, scale-out "hyperdisaggregated" data centers, in which servers interact as a unified pool of disaggregated compute and storage resources to serve application needs. This presentation will discuss two of the biggest challenges in scale-out data centers: inefficient interchange of data and inefficient execution of data-centric computations. It will also include a deep dive into the solution, the Fungible DPU, a programmable microprocessor that addresses these inefficiencies while strengthening data-center reliability, security, and agility.
The emerging edge opportunity is similar to the last decade’s growth in cloud infrastructure. The factors driving infrastructure transformation at the edge make a strong case for the fourth FPGA wave, in which FPGAs become ubiquitous programmable building blocks. 5G and IoT will drive demand for massive data bandwidth and computing at the source of the data. COVID-19 is accelerating digital transformation and the need for efficient edge computing. This presentation will discuss these technology trends, challenges and solutions.
Infrastructure processors, which combine compute, networking, and programmable accelerators, are required across the entire network. This presentation explores these end-to-end networking requirements and discloses how the architecture of the Octeon CN98xx DPU was designed to meet them. Architecture disclosures include innovations in caching, hardware scheduling, virtualization provisioning, traffic management, inline cryptography, regular-expression hardware, and more.
For this session, each talk will have 10 minutes of Q&A immediately following.
11:40am-12:40pm | Breakout sessions with today's speakers
1:30pm-3:30pm | Speaker 1:1 Meetings