Linley Fall Processor Conference 2021

Coming October 20-21, 2021
Hyatt Regency Hotel, Santa Clara, CA


Agenda for Day Two: Thursday, October 21, 2021

9:00am-9:50am  Keynote:

Follow the Smart Money: VC Perspectives on Emerging Tech
Kushagra Vaid, Partner, Eclipse Ventures

Venture capital investments are at an all-time high, with an increasing amount of funding going towards semiconductor and hardware startups. This keynote presentation will dive into current market trends and investment themes, providing insights into the various hardware categories where venture dollars are being allocated. We will then cover key emerging technologies that are poised to disrupt the status quo, driving new computing models from cloud through edge.

There will be Q&A following this presentation.

9:50am-10:10am  BREAK – Sponsored by Arm
10:10am-12:15pm  Session 7: Edge-AI Processing

As AI applications move from cloud platforms into edge devices, processor designers are increasingly including hardware accelerators for this important function. These processors must meet lower cost and power requirements while still satisfying the rising performance needs of consumer and commercial applications. This session, moderated by The Linley Group principal analyst Linley Gwennap, examines a range of chips and IP cores that accelerate edge-AI inference and training.
 

Exploring Workload Performance in Heterogeneous Edge AI SoCs
Sharad Chole, Chief Scientist and Cofounder, Expedera

Heterogeneous edge-AI SoCs are struggling to meet application requirements because of low utilization, saturated bandwidth, power constraints, and subpar NN accuracy, problems that often stem from limited visibility into workload performance during the design phase. Expedera’s cycle-accurate Estimator and precision-accurate Functional Model enable SoC architects to explore architectural decisions concerning quantization, bandwidth, and power. Expedera’s deterministic Neural Engine, paired with the Origin Software Platform, integrates with all major frameworks through TVM/TFLite and allows developers to maximize accuracy and TOPS per watt.

Achieving High Compute Density and Software Programmability on the Edge
Martin Hunt, Director, Applications Engineering, Coherent Logix

As the complexity of multi-functional and multi-modal edge applications increases, the demand for computationally efficient yet flexible software-based processing is reaching new levels. Coherent Logix is announcing its fourth-generation HyperX hx40416 processor to address this demand with a scalable computing fabric based on its Memory Network architecture. The processor targets edge applications with high-bandwidth, multi-modal sensor input, a low thermal and power budget, low actionable latency, and complete software programmability. We will highlight the architectural features of this processor.
 

Delivering Leadership Performance and Efficiency for Edge Applications
Mike Vildibill, Vice President, Product Management, Qualcomm

The Cloud AI 100 accelerator offers leadership-class performance and power efficiency across many applications, ranging from the datacenter to edge deployments. In this talk we will discuss Qualcomm’s comprehensive offering of commercial software and hardware tools for integration and deployment in production settings. The presentation will dive into Foxconn’s Gloria AI Edge Box, our new joint announcement, featuring a turnkey commercial device powered by Qualcomm Cloud AI 100 with Snapdragon, running Linux with 5G connectivity.
 

INT8 AI Training Everywhere
Moshe Mishali, CTO and Cofounder, Deep AI

Floating-point operations (FLOPs) are today’s vehicle for training AI models. We discuss the impact of low-precision training technology on the entire AI ecosystem. 8-bit integer (INT8) operations shorten training times and reduce data-center bandwidth by more than an order of magnitude. Furthermore, INT8 training technology ignites a ground-breaking paradigm shift. Low-power edge devices can now perform retraining cycles completely locally, i.e., without sending data to the cloud, and mobile devices can ultimately run personalized AI applications.
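
As a rough illustration of the storage side of this shift, the sketch below (a minimal Python example with hypothetical names and shapes, not Deep AI's training method) quantizes an FP32 tensor to INT8, cutting its raw footprint by 4x; the larger savings cited above come from the full low-precision training pipeline, not from this step alone.

import numpy as np

# Illustrative only: symmetric per-tensor INT8 quantization of an FP32 tensor.
weights_fp32 = np.random.randn(1024, 1024).astype(np.float32)

scale = np.abs(weights_fp32).max() / 127.0        # map the FP32 range onto [-127, 127]
weights_int8 = np.clip(np.round(weights_fp32 / scale), -127, 127).astype(np.int8)

print(f"FP32: {weights_fp32.nbytes / 2**20:.1f} MiB")   # 4.0 MiB
print(f"INT8: {weights_int8.nbytes / 2**20:.1f} MiB")   # 1.0 MiB -- 4x less data to move and store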

There will be Q&A and a panel discussion featuring the above speakers.

12:15pm-1:45pm  LUNCH – Sponsored by Flex Logix
1:45pm-3:45pm  Session 8: High-Performance Processor Design

Vendors seek to improve performance at various levels of a product’s architecture. This session, moderated by The Linley Group senior analyst Aakash Jani, explores how companies lift performance through their data fabrics, microarchitecture, and software, and looks at each company’s vision for further improvements in high-performance processor design.

Alder Lake Performance Hybrid Architecture
Rajshree Chabukswar, Senior Principal Engineer, Client Computing SoCs, Intel

Reinventing the multicore architecture, Alder Lake will be Intel’s first performance hybrid architecture, featuring a combination of Performance cores and Efficient cores along with the new Intel Thread Director. Intel Thread Director delivers a unique approach to thread scheduling and ensures Efficient cores and Performance cores work seamlessly together by dynamically and intelligently assigning workloads for maximum real-world performance. Alder Lake is Intel’s next-generation client SoC architecture, scaling from ultra-mobile to desktop and bringing multiple industry-leading I/O and memory technologies to market.

RISC-V at Scale: An Architecture for the Future of Computing
Shubu Mukherjee, Distinguished Engineer, SiFive

The next wave of RISC-V adoption will occur at the bleeding edge, where raw performance is paramount. Recent advances in microarchitecture, including multi-core and multi-cluster topologies, will accelerate RISC-V adoption in diverse application areas such as mobile, autonomous vehicles, and the datacenter. This SiFive talk will preview an architecture intended to deliver a leap in performance, evidenced by industry-standard benchmarks such as SPECint, that will be the foundation of the rapid acceleration of RISC-V deployment across the most challenging application domains.

Scalable Cortex CPU Clusters with Next-Generation DynamIQ Shared Unit
Pieter Arnout, Principal FAE, Arm

Over the past decade, CPU cluster topology has become a key part of SoC development to meet an increasingly diverse set of needs across multiple markets, from unleashing the performance of laptops to maximizing the efficiency of wearables. The new generation of the Arm DynamIQ Shared Unit (DSU-110) addresses this challenge. This talk will cover how to use the DSU’s scalability and capability to create flexible Arm Cortex-based CPU cluster designs, delivering higher power efficiency and longer battery life for devices.

This session will include Q&A after each presentation.

3:45pm  End of Conference

 

Premier Sponsor

Platinum Sponsor

Gold Sponsor
Andes Technology

Silver Sponsor

Industry Sponsor