Linley Spring Processor Conference 2019

April 10 - 11, 2019
Hyatt Regency, Santa Clara, CA


Agenda for Day Two: April 11, 2019

After Meltdown and Spectre: Security Concerns Facing Contemporary Microarchitectures
Jon Masters, Computer Architect, Red Hat


There will be Q&A following this presentation.

10:00am-10:20am BREAK – Sponsored by Intel
10:20am-12:00pm Session 4: AI in Data Center

Data-center services are evolving from simple web functions to voice interfaces, image searches, content filtering, and data mining. Deep neural networks (AI) are being broadly deployed to support these services and sift through massive pools of "big data" swiftly and efficiently. This session, moderated by The Linley Group principal analyst Bob Wheeler, will discuss how server designers can improve the processing, memory, and I/O capabilities of their systems to address these changing workloads.

DL Boost: Embedded AI Acceleration in Intel Xeon Scalable CPUs
Ian Steiner, Xeon Scalable CPU Lead Architect, Intel

Deep-learning inference has emerged as a critical compute component in today's data centers. In the recently launched Xeon Scalable CPU, Intel added the DL Boost (VNNI) extensions to accelerate INT8 deep-learning inference. To support these new hardware capabilities, Intel is making significant engineering investments to develop the open-source software ecosystem. This presentation will provide a deep dive into the performance behaviors of a popular DL topology running on top of Intel MKL-DNN using VNNI.
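The core of the VNNI extension is the VPDPBUSD instruction, which fuses what were previously three AVX-512 instructions into one: it multiplies unsigned 8-bit activations by signed 8-bit weights and accumulates pairwise sums of four products into 32-bit lanes. A minimal sketch of that lane-level arithmetic, written in plain Python purely for illustration (this is not Intel's implementation, and the values are made up):

```python
# Sketch of the arithmetic performed by one 32-bit lane of VPDPBUSD
# (AVX-512 VNNI): acc += sum of four u8 * s8 products, widened to int32.

def vpdpbusd_lane(acc, a_bytes, w_bytes):
    """One 32-bit lane: accumulate four unsigned-by-signed byte products."""
    assert len(a_bytes) == len(w_bytes) == 4
    for a, w in zip(a_bytes, w_bytes):
        assert 0 <= a <= 255        # unsigned 8-bit activation
        assert -128 <= w <= 127     # signed 8-bit weight
        acc += a * w                # products accumulate into a 32-bit lane
    return acc

# A 16-element INT8 dot product becomes 4 lane operations rather than
# 16 separate multiply-adds; across a full 512-bit register this packs
# 64 byte-pairs into a single instruction, which is where the inference
# speedup comes from.
acts    = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3]
weights = [2, -7, 1, 8, -2, 8, 1, -8, 2, 8, -4, 5, 9, 0, -4, 5]

total = 0
for i in range(0, 16, 4):
    total = vpdpbusd_lane(total, acts[i:i+4], weights[i:i+4])

# The lane-wise result matches the plain dot product.
assert total == sum(a * w for a, w in zip(acts, weights))
print(total)
```

In practice this instruction is reached through MKL-DNN rather than hand-written intrinsics; the library selects VNNI code paths automatically on CPUs that support them.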

Intel Nervana Neural Network Processor: Redesigning AI Training Silicon
Carey Kloss, VP and GM of AI Hardware, Intel

AI computational advances have surged, but memory is now limiting the performance and capacity of hardware architectures. Breaking through this memory barrier drove an entirely new approach to the Nervana Neural Network Processor for Learning (NNP-L), built from the ground up to accelerate deep learning. This presentation will discuss the architecture underlying the NNP-L and how this chip optimizes compute, memory, and interconnects to provide higher utilization and better accelerate deep-learning training.

DDR5: Mainstream Memory That Maximizes Effective Bandwidth
Brian Drake, Senior Business Development Manager, Micron

The "data economy" is driving demand for higher-bandwidth memory due to increasing CPU core counts, frequency, and IPC. The explosion in compute capability magnifies the pressure on memory and storage, requiring more bits and higher bandwidth. Tiered solutions of memory and storage are the reality of the future. This presentation explains how DDR5 will make a difference for compute-intensive applications and provides examples of how DDR5 improves performance on specific workloads and enables real-world bandwidth improvements.
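As a back-of-envelope illustration of the headline numbers (not figures from the talk): peak theoretical bandwidth for a 64-bit-wide DIMM is the transfer rate in MT/s times 8 bytes. DDR5 additionally splits each DIMM into two independent 32-bit subchannels, which helps effective bandwidth under mixed access patterns even at the same transfer rate.

```python
# Peak theoretical DIMM bandwidth: transfer rate (MT/s) x bus width (bytes).
# Illustrative sketch only; real effective bandwidth is lower and depends
# on access patterns, refresh overhead, and channel utilization.

def peak_bandwidth_gbs(mega_transfers_per_sec, bus_bytes=8):
    """Peak bandwidth in GB/s for a DIMM with a 64-bit (8-byte) data bus."""
    return mega_transfers_per_sec * 1e6 * bus_bytes / 1e9

ddr4 = peak_bandwidth_gbs(3200)  # DDR4-3200
ddr5 = peak_bandwidth_gbs(4800)  # DDR5-4800 launch speed
print(ddr4, ddr5)  # 25.6 GB/s vs 38.4 GB/s per DIMM
```

Even at its initial 4800 MT/s speed grade, DDR5 offers a 50% peak-bandwidth increase per DIMM over top-bin DDR4-3200, before counting the efficiency gains from dual subchannels and longer burst lengths.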

There will be Q&A and a panel discussion featuring the above speakers.

12:00pm-1:20pm LUNCH – Sponsored by Arm
1:20pm-2:30pm Session 5: SoC Design

Integration of heterogeneous IP blocks presents many SoC-implementation challenges. Designers expect plug-and-play IP, but each product is different, so some customization is always required. Employing a network-on-chip (NoC) can ease integration, as can the use of configurable silicon-proven cores. This session, moderated by Linley Group senior analyst Mike Demler, will discuss the benefits of NoC IP and design platforms for complex ASICs.

Opposites Attract: Customizing and Standardizing IP Platforms for ASIC Differentiation
Carlos Macian, Senior Director AI Strategy and Products, eSilicon

IP cores are fundamental building blocks of modern ASICs, often providing a competitive edge in spite of their standard nature. And yet, true differentiation and optimization mandate customizing the IP for specific product needs. The challenge is combining standardization and ease of integration, for an accelerated and predictable schedule, with the need to optimize the IP. This presentation will explore an approach to this problem using best-in-class, silicon-proven IP that is also designed for ease of integration and application-specific customization.

Adapting SoC Architectures for Types of Artificial-Intelligence Processing
Matthew Mangan, Applications Engineer, ArterisIP

"AI" is often used abstractly to refer to systems (and chips) that implement machine-learning algorithms. But different types of chips are required for different types of AI/ML processing, whether for neural-network training or inference, or for a data center or battery-powered client. This presentation describes how to use network-on-chip (NoC) technology to efficiently implement SoC architectures targeting different types of AI processing, including advanced techniques such as when to use tiling or cache coherence.

There will be Q&A and a panel discussion featuring the above speakers.

2:30pm-2:50pm BREAK – Sponsored by Intel
2:50pm-4:00pm Session 6: DSP Cores

Automotive lidars and radars produce signals that are very different from those of cellular radios, but they have similar requirements for high-performance DSPs. Both demand low latency, high parallelism, and high throughput to handle multiple antennas and receivers, along with embedded CPUs to handle control functions. This session, moderated by Linley Group senior analyst Mike Demler, will describe two new DSP-IP cores that can handle complex signal-processing tasks in automotive, IoT, and other challenging applications.

A Multipurpose Hybrid DSP and Controller Architecture for IoT and Wireless
Uri Dayan, Team Leader, Processor Architecture, CEVA

The new CEVA-BX architecture can combine DSP and control tasks on a single processor. In the real-time control domain, cellular-IoT and wireless applications benefit from executing L1 control and sensor fusion on the same processor, with security often being an additional requirement. In the signal-processing domain, noise reduction and speech recognition require a strong processor with low-latency DSP processing. The presentation will describe how the new architecture performs efficiently for these and other modern DSP applications.

High Resolution, Low-Power, Programmable DSPs Optimized for Radar Sensors
Pierre-Xavier Thomas, Engineering Group Director, Cadence

Radar technology plays a critical role in automotive applications such as autonomous driving, advanced driver assistance systems (ADAS), in-cabin monitoring, and gesture recognition. These applications require increased performance and capabilities from the radar module to accurately determine the distance, direction, and speed of multiple targets. This presentation will show how the complex algorithms required for high-resolution mm-wave radar receivers can be efficiently implemented in a simple subsystem using a DSP optimized for radar signal processing.

There will be Q&A and a panel discussion featuring the above speakers.



Platinum Sponsor

Gold Sponsor
Andes Technologies

Media Sponsor