Linley Fall Processor Conference 2020

Held October 20-22 and 27-29, 2020
Proceedings available


Agenda for Day Six: Thursday October 29, 2020

8:30am-10:00am Session 9: SoC Design

SoC and processor design is increasing in complexity. Almost every SoC in production uses delivered IP, whether from external or internal sources. Given the rapid pace of architecture releases, software developers need low-cost, cohesive solutions for creating the corresponding software. This session, moderated by The Linley Group senior analyst Aakash Jani, discusses a RISC-V development board that will aid Linux software development and two licensable IP solutions that will help mitigate the strain of integrating IP.

Creating a RISC-V PC Ecosystem for Linux Application Development
Yunsup Lee, CTO, SiFive

New processor architectures require access to development environments to create and optimize software. This presentation describes a next-generation SoC that enables professional developers to create RISC-V applications from bare-metal to Linux-based, including porting of existing applications. The Freedom U740 next-generation SoC combines a heterogeneous mix-and-match core complex with modern PC expansion capabilities and form factor, along with a suite of included tools to facilitate broad professional software development.

Mobile CPUs Unlock the Novel Devices of Tomorrow
Stefan Rosinger, Director of Product Marketing, Arm and Jinson Koppanalil, Distinguished Engineer, Arm

Mobile computing continues to evolve rapidly. New use cases and form factors are driving aggressive and seemingly contradictory design points. This poses a challenge for companies as they build innovative silicon for tomorrow’s devices. This presentation will provide an overview of how the latest Arm CPUs are unlocking novel, flexible, and holistic solutions. These CPUs help deliver high-performance, efficient, and cross-platform heterogeneous compute without compromising on security or time-to-market.

A Flexible Multiprotocol Cache Coherent Network-on-Chip (NoC) for Heterogeneous SoCs
Michael Frank, Fellow and Chief Architect, Arteris IP

As AI and ML drive chip complexity, heterogeneous architectures using multiple types of processing elements are becoming a practical solution to meet processing and power requirements in systems-on-chip (SoC). This presentation describes new technology that allows processors developed with AMBA CHI, ACE and AXI interfaces to be integrated together in a single cache-coherent system. The talk includes graphical examples that highlight topology and network node configuration flexibility and the use of integrated system-level simulation to determine optimal architectures.

For this session, each talk will have 10 minutes of Q&A immediately following.

10:00am-10:10am Break Sponsored by The Linley Group
10:10am-11:40am Session 10: In-Memory Compute

In-memory compute promises the next big breakthrough in performance per watt. For many applications, the power required to move data between memory and the compute unit exceeds the power of the computation itself. Performing computation in the memory greatly reduces this data movement, providing order-of-magnitude efficiency gains on AI and big-data applications. This session, moderated by The Linley Group principal analyst Linley Gwennap, explains how these new architectures work, what applications they accelerate, and the efficiency gains they achieve.
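The claim that data movement dominates compute energy can be made concrete with rough published figures (Mark Horowitz's widely cited 45 nm estimates from ISSCC 2014, not measurements of any product presented in this session). A back-of-envelope sketch:

```python
# Back-of-envelope energy comparison for one multiply-accumulate whose two
# operands are fetched from off-chip DRAM. The per-operation energies are
# rough 45 nm estimates (Horowitz, ISSCC 2014), used purely for illustration.
DRAM_ACCESS_PJ = 640.0   # ~energy per 32-bit DRAM read
FP32_MULT_PJ = 3.7       # ~energy per 32-bit floating-point multiply
FP32_ADD_PJ = 0.9        # ~energy per 32-bit floating-point add

data_movement = 2 * DRAM_ACCESS_PJ        # fetch two operands from DRAM
compute = FP32_MULT_PJ + FP32_ADD_PJ      # one multiply-accumulate
ratio = data_movement / compute
print(f"movement {data_movement:.0f} pJ vs compute {compute:.1f} pJ "
      f"(~{ratio:.0f}x)")  # data movement dominates by two orders of magnitude
```

Eliminating or shortening that DRAM round trip is exactly the lever that in-memory and near-memory architectures pull.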

An Associative Processing Structure Challenges Von Neumann Architecture
Bob Haig, Director of Product Management, GSI Technology and Dan Ilan, Hardware Architect, GSI Technology

Standard CPUs, with tens to hundreds of cores and ALUs, can perform complex calculations on small datasets very quickly, but they are less efficient when handling large datasets due to the memory bottleneck imposed by traditional von Neumann architecture. When a task requires performing parallel calculations across a large dataset, a higher-performance and more energy-efficient "Associative Processing Unit" can be designed as an interconnected array of millions of simple "Boolean bit processors" utilizing in- and near-memory processing techniques.
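The core idea of associative processing (searching by content, with every memory row compared against a key in parallel) can be sketched in a few lines. This is an illustrative toy model only, assuming invented names and an 8-bit word size; it is not GSI's APU design, and NumPy vectorization merely stands in for the hardware's row-parallel Boolean logic.

```python
# Toy model of associative (content-addressable) search: every row of a bit
# array is compared against a key "simultaneously", returning a match mask.
# All names, sizes, and data here are invented for illustration.
import numpy as np

def associative_match(memory: np.ndarray, key: int) -> np.ndarray:
    """Return a boolean mask with one entry per row, True where the row's
    8 bits equal `key`. NumPy broadcasting plays the role of the parallel
    per-bit comparators inside an associative memory array."""
    key_bits = np.array([(key >> i) & 1 for i in range(7, -1, -1)],
                        dtype=np.uint8)
    agree = memory == key_bits      # broadcast compare, shape (N, 8)
    return agree.all(axis=1)        # a row matches only if every bit agrees

# Usage: search 4 stored words for the value 0b10110001.
words = np.array([
    [1, 0, 1, 1, 0, 0, 0, 1],
    [0, 1, 1, 1, 0, 0, 0, 1],
    [1, 0, 1, 1, 0, 0, 0, 1],
    [0, 0, 0, 0, 0, 0, 0, 0],
], dtype=np.uint8)
mask = associative_match(words, 0b10110001)
print(mask)  # rows 0 and 2 match
```

The key property the toy captures is that the search cost does not grow with how many rows participate: the comparison happens across all rows at once, which is what gives associative processing its advantage on large datasets.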

At-Memory Computation: A Transformative Compute Architecture for Inference Acceleration
Robert Beachler, VP of Product, Untether AI

Traditional processor architectures are failing to keep up with the exploding compute demands of AI workloads. They are limited by the power-hungry weight-fetch of von Neumann architectures and by the limits of transistor and frequency scaling. At-memory computation places compute elements directly in the memory array, providing reduced power consumption and increased throughput due to the massive parallelism and bandwidth provided by the architecture. This presentation introduces a new class of non-von Neumann compute designed to meet these AI demands.

A General-Purpose AI Processor for Micro Edge Applications
GP Singh, CEO, Ambient Scientific

Micro edge applications present severe constraints in power consumption and cost. Architectures based solely on off-the-shelf components cannot meet these requirements. New innovations are required, all the way down to circuits and devices. This presentation discusses the implementation of the GPX-10, a general-purpose AI processor, and its DigAn technology platform, which uses custom circuit components to deliver high-performance AI inference and retraining within the power and cost requirements for micro edge devices.

There will be Q&A and a panel discussion featuring the above speakers.

11:40am-12:40pm Breakout sessions with today's speakers
1:30pm-3:30pm Speaker 1:1 Meetings
3:30pm End of Conference


Premier Sponsor
Platinum Sponsor
Gold Sponsors: Andes Technology, GSI Technology
Industry Sponsor
Media Sponsor