Linley Spring Processor Conference 2022

Conference Dates: April 20-21, 2022
Hyatt Regency Hotel, Santa Clara, CA


Agenda for Day One: Wednesday, April 20, 2022
9:00am-10:00am  Keynote:

AI Moves From Cloud to Edge
Linley Gwennap, Principal Analyst, TechInsights

For chip vendors, AI has evolved from a product to a feature, with AI acceleration appearing in servers, PCs, smartphones, and a vast array of edge devices. Architecture innovation is particularly rampant at the edge, where power limitations are stringent and software stacks are smaller. Edge chips target a range of applications from battery-powered sensors to powerful camera-based systems. This presentation will describe the latest architecture trends for AI accelerators of all types.

There will be Q&A following this presentation.

10:00am-10:20am  BREAK – Sponsored by Intel
10:20am-12:20pm  Session 1: Edge-AI Design

As AI applications move from cloud platforms into edge devices, processor and system designers are increasingly including hardware accelerators for this important function. These accelerators must fit lower cost and power budgets while still meeting the rising performance needs of consumer and commercial applications. This session, moderated by TechInsights senior analyst Bryon Moyer, examines accelerator IP and software for edge-AI inference.

Bigger, Faster, and Better AI Using Synopsys NPUs
Pierre Paulin, Director of R&D, Embedded Vision, Synopsys

AI applications are driving the need for more efficient neural-network processing, but the trick is achieving GPU-level performance within an embedded power and area budget. This presentation covers Synopsys’ new Neural Processing Units (NPUs), based on a novel architecture and trusted software tools, which significantly improve MAC utilization, support the latest neural-network techniques, and scale from battery-powered devices to L3+ autonomous driving.

Energy-Optimized On-Device AI Processing
David Bell, Product Marketing Manager for Tensilica AI Products, Cadence

On-device AI processing is becoming pervasive across embedded devices such as hearables, wearables, TWS earbuds, smart sensors, and smart appliances, where the right level of processing at low energy consumption is critical for longer battery life. This presentation will discuss the first AI Boost product from Cadence, the Tensilica NNE110 neural network engine. Coupled with the already efficient Tensilica DSPs, along with software tools and support for a range of software frameworks, the NNE110 provides an ideal AI solution for low-energy, battery-powered applications.

Highly Scalable Heterogeneous and Secure Architecture for AI/ML in Smart Edge Devices
Gil Abraham, Business Development Director, Vision Business Unit, CEVA

AI processors have become an integral part of many products, a growing challenge as demand for ML computational power increases while AI processing keeps moving to the power-constrained edge. In this presentation we will discuss NeuPro-M, CEVA’s latest AI processor architecture, and how it addresses current AI-processing challenges, such as keeping pace with a fast-moving technology environment, while providing a full SDK to reduce risk and design time. By examining NeuPro-M’s heterogeneous mechanisms, we’ll show its exceptional ML efficiency, high TOPS/W, and scalability to meet diverse use cases in a range of end markets.

Meeting the Real Challenges of AI
Randy Allen, Vice President of Software, Flex Logix

Machine learning was first described in its current form in 1952. Its recent re-emergence is the result not of technical breakthroughs but of newly available computation power. The ubiquity of ML, however, will be determined by how many computational cycles we can productively apply within the constraints of latency, power, area, and cost. That has proven to be a difficult challenge. This talk will discuss approaches to creating parallel heterogeneous processing systems that can meet it.

There will be Q&A and a panel discussion featuring the above speakers.

12:20pm-1:35pm  LUNCH – Sponsored by Flex Logix
1:35pm-2:40pm  Session 2: Low-Power AI Architectures

Several companies are developing ultralow-power AI accelerators to minimize the energy consumption of always-on circuits. These accelerators are often used for speech detection and recognition in battery-operated systems. Some vendors use standard digital approaches, while others are attempting analog computing, which promises order-of-magnitude power reductions. This session, moderated by TechInsights principal analyst Linley Gwennap, examines technologies at the leading edge of low-power design.

New Paradigm in Machine Learning: Inferencing in Analog
David Graham, CSO and Founder, Aspinity

Aspinity has developed a software-programmable, fully analog chip that performs machine learning while sensor data are still in their natural analog state, enabling a new level of power and data efficiency for always-on sensing applications. In this presentation, we will describe the analog processing technology and demonstrate how analogML can reduce always-on system power by more than 95% and extend battery life by 20x relative to today’s machine-learning solutions that operate within a traditional digital framework.

The Gaussian and Neural Accelerator
Ethan Kalifon, Sr. Hardware Component Engineer, Intel

The Intel Gaussian and Neural Accelerator (GNA) is a low-power co-processor integrated into several Intel processors, including the 11th and 12th Generation Intel Core processors (formerly codenamed Tiger Lake and Alder Lake, respectively). It is designed to offload continuous inference workloads such as noise reduction and speech recognition, and new use cases continue to emerge. This talk will introduce the GNA architecture, its value proposition, and the software needed to employ it.
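
For context, Intel exposes the GNA to applications through a dedicated plugin in its OpenVINO toolkit. Below is a minimal sketch, assuming a pre-converted OpenVINO IR model (the file name and input shape are hypothetical placeholders), of how a noise-reduction network might be offloaded to the GNA using the OpenVINO 2022 Python API:

    # Minimal sketch: offloading inference to the Intel GNA via OpenVINO.
    # "noise_suppression.xml" and the input shape are hypothetical placeholders.
    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model("noise_suppression.xml")

    # "GNA" selects the Gaussian & Neural Accelerator plugin; layers the GNA
    # cannot execute can fall back to the CPU via "HETERO:GNA,CPU".
    compiled = core.compile_model(model, device_name="GNA")

    # Feed one frame of audio features (shape is model-specific).
    frame = np.zeros((1, 160), dtype=np.float32)
    result = compiled([frame])

Because the GNA handles such workloads continuously at very low power, the CPU cores can remain in a low-power state while, for example, a microphone pipeline stays active.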

This session will include Q&A after each presentation.

2:40pm-3:00pm  BREAK – Sponsored by Ceremorphic
3:00pm-5:05pm  Session 3: Innovations in Compute Architecture

Power-efficient performance is the new imperative across edge, enterprise, and cloud environments. Compute architectures must adapt, whether to maximize rack density or to meet stringent thermal requirements. This session, led by TechInsights principal analyst Bob Wheeler, examines emerging programmable architectures that boost workload efficiency.

A Linux-Compatible 64-bit High-Performance In-Order RISC-V CPU Core
Zvonimir Bandić, Sr Director Next Generation Platform Technologies, Western Digital

Western Digital recently developed the 64-bit, Linux- and Android-capable SweRV EHX3 CPU core, which targets SoCs for Linux systems in use cases from computational storage to smart NICs. This talk will cover the IP’s architectural details; power, area, and performance benchmarks; and application block diagrams.

Accelerating Virtualized Workloads with DPUs
John Sakamoto, Vice President, Infrastructure Processor BU, Marvell

Today, DPUs are widely deployed in cloud infrastructure to offload network, storage, and security functions from the server host. Increasingly, DPU use cases are expanding to accelerate virtualized implementations of network firewalls and the RAN, delivering orders of magnitude more performance than software-only implementations. This presentation will cover Marvell’s wide range of DPU accelerator cards and Velox software solutions, which enable optimal virtualized network, storage, security, and AI performance.

Delivering Reliable Performance Computing in an Energy-Efficient AI Supercomputing Chip
Venkat Mattela, Founder and CEO, Ceremorphic

Tracking, detecting, and correcting errors in complex chips is a monumental task. Today, only a few application areas, such as automotive, have adopted reliable silicon in their system designs. The rapid proliferation of machine-learning applications, whose high performance demands require ever-larger chips, has made reliability a mainstream issue in semiconductors. Replication of resources, the method adopted for decades, is not a sustainable solution. Ceremorphic will present the new reliability architecture adopted in its 5nm Hierarchical Learning Processor (HLP).

New Tools to Accelerate Neuromorphic Computing
Mike Davies, Director, Neuromorphic Computing Lab, Intel

Despite much progress in neuromorphic computing over the past several years, the field remains at a nascent stage, with challenges facing both the application and the scaling of neuromorphic technology in support of AI and edge-computing breakthroughs. Loihi 2, Intel’s second-generation neuromorphic research processor, together with Lava, an open-source software framework, advances the state of the art in this field, promising greater gains than Loihi 1, broader accessibility for mainstream developers, and faster progress toward commercial impact.
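
As an illustration of the programming model, Lava describes networks as processes connected through ports. The sketch below, based on the early open-source lava-nc releases (exact module paths and signatures may differ in later versions), wires two leaky integrate-and-fire (LIF) populations through a dense synaptic connection and runs them on Lava’s CPU simulator:

    # Illustrative sketch using Lava's process/port model (early lava-nc API).
    import numpy as np
    from lava.proc.lif.process import LIF
    from lava.proc.dense.process import Dense
    from lava.magma.core.run_conditions import RunSteps
    from lava.magma.core.run_configs import Loihi1SimCfg

    pre = LIF(shape=(3,))           # presynaptic neuron population
    syn = Dense(weights=np.eye(3))  # one-to-one synaptic weights
    post = LIF(shape=(3,))          # postsynaptic neuron population

    pre.s_out.connect(syn.s_in)     # spikes flow from neurons into the synapse
    syn.a_out.connect(post.a_in)    # weighted activations drive the next layer

    # Run 10 timesteps on the CPU backend; Loihi hardware uses a different RunConfig.
    post.run(condition=RunSteps(num_steps=10), run_cfg=Loihi1SimCfg())
    post.stop()

The same process graph can in principle be retargeted from the CPU simulator to Loihi silicon by swapping the run configuration, which is the kind of portability Lava is designed around.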

This session will include Q&A after each presentation.

5:05pm-6:30pm  Reception and Exhibits – Sponsored by Intel
