Linley Fall Processor Conference 2019

October 23 - 24, 2019
Hyatt Regency, Santa Clara, CA


Agenda for Day One: October 23, 2019

9:00am-9:50am Keynote Session:

Accelerating AI from Cloud to Edge
Linley Gwennap, Principal Analyst, The Linley Group

Deep learning, often called AI, has become an important workload for nearly every processor and end application, including data centers, automobile safety systems, smartphones, and even microcontroller-based consumer devices. This presentation will touch on the latest trends in deep-learning accelerators, such as sparse compute, spiking neural networks, and in-memory (analog) compute. It will also discuss how these accelerators are used across a range of end applications.

There will be a brief Q&A with the speaker.

9:50am-10:10am Break - Sponsored by Intel
Track A | Track B
10:10am-12:15pm Session 1: Data Center Technologies

As cloud operators adopt workload-optimized architectures, the days of homogeneous computing in the data center are gone. But optimization involves complex relationships between the CPU, memory, accelerators, network, and storage. This session, led by The Linley Group principal analyst Bob Wheeler, will examine several angles of workload performance.

Server Processors Require Targeted Application Differentiation
Gopal Hegde, VP & GM, Server Processor Business Unit, Marvell

The server market is evolving from generic processors for general-purpose compute to products that offer workload optimization for targeted applications. This presentation will discuss how the ThunderX2 processors use the Arm architecture to deliver differentiated performance for critical workloads and enable new use cases in the cloud and at the edge. It will also include an update on the ThunderX roadmap and how Marvell is evolving the ThunderX product line to continue to offer differentiated value to data center customers.

A New Generation of Processors and SmartNICs
Kevin Deierling, Sr. Vice President of Marketing, Mellanox

The end of Moore's Law is turning conventional data center architectures upside down. Instead of ever faster processors, the industry is turning to clusters of specialized domain-specific processors. Networks are evolving from raw bandwidth to intelligent access to compute and storage. And soon, networks will be able to think outside the computer. This presentation will cover several new networking products that are true I/O processing units that can operate as SmartNICs or even as standalone storage and security processors.

Memory in the New Era of Compute
Craig McGowan, Sr. Business Development Manager, Compute & Networking Business, Micron

The changing face of compute in the data center, the impact of 5G, and the push to the edge are placing more pressure than ever before on the traditional compute and memory systems. This increase in compute diversity means that traditional memory solutions are no longer the optimal choices they once were. This presentation will explore these changes in the compute landscape and the resulting trade-offs for current and emerging DRAM subsystems.

Next-Generation FPGAs for Accelerating Compute, Network, and Storage Workloads
Manoj Roge, Vice President of Strategic Product Planning, Achronix

With exponential growth in unstructured data, there is tremendous pressure to improve the performance and economics of the underlying infrastructure while supporting ever-changing workloads. Hyperscalers are shifting their architecture from CPU-centric computing to data-centric computing, in which heterogeneous accelerators play an important role in meeting performance and TCO goals. Next-generation FPGAs will play a vital role in addressing these challenges by increasing compute density, memory bandwidth, and data mobility.

There will be Q&A and a panel discussion featuring the above speakers.

Session 2: AI in Automotive

It only takes one engine to drive a car, but replacing a human behind the wheel will require powerful processors running multiple neural-network accelerators. Sensor-fusion and computer-vision algorithms run well on DSPs, but to connect those cores in heterogeneous SoCs designers need a low-latency network on chip (NoC). This session, moderated by The Linley Group senior analyst Mike Demler, will discuss AI accelerators and licensable IP for advanced automotive applications.

AI and Vision Processor for Camera-Based Automotive Use Cases
Jeff VanWashenova, Director of Marketing for Automotive Segment, CEVA

Artificial intelligence (AI) research is rapidly progressing and evolving, but developing automotive cameras for production is a long and rigorous task. These applications combine various computer-vision and imaging tasks with evolving neural networks, and they require careful attention to quality, power and performance efficiency, cost, and safety. This presentation will discuss the various challenges of automotive production development and market trends, and how to ease this task using a complete, low-power processing engine with mature tools and a development environment.

A New Generation of Advanced AI/Floating-Point DSPs
Graham Wilson, Senior Product Marketing Manager, Synopsys

Digital signal processors (DSPs) have evolved over decades to meet changing computation requirements. Today's algorithms need AI processing, vector floating point, and standard DSP vector operations. To support the intensive computation requirements of automotive ADAS, wireless communications, and high-end IoT applications, modern DSPs must have multiple vector pipelines customized for algorithms that use AI and floating point. This presentation will describe the trends and applications driving the need for higher-bandwidth signal processing, as well as new DSP IP from Synopsys that will help designers address a broad range of DSP workloads.

Efficiency at Scale – 4K Image Processing in Edge Devices
Paul Master, CTO and cofounder, Cornami

Exploding datasets and evolving algorithms that require super-resolution at the edge demand efficient, scalable architectures. Current acceleration approaches, such as the SIMD model used by GPUs, face critical challenges. This presentation discloses new architectural techniques for reconfiguring and scaling hardware to efficiently map higher-resolution neural networks while meeting performance and power targets.

Integrating AI Accelerators into Automotive SoCs with Functional Safety
JP Loison, Corporate Application Engineer, ArterisIP

As ADAS and autonomous-driving systems become more complex, the number and complexity of hardware accelerators implementing AI processing is increasing. Integrating these processing elements into ISO 26262-compliant systems is a technical and operational challenge. This presentation describes lessons learned using network-on-chip (NoC) technology to implement automotive SoCs performing AI processing, including the use of NoC functional-safety mechanisms, external IP checking, and validation strategies.

There will be Q&A and a panel discussion featuring the above speakers.

12:15pm-1:30pm Lunch - Sponsored by Arm
1:30pm-3:00pm Session 3: AI in the Data Center

Most data centers today rely on GPU-based accelerators for their AI applications, but new vendors and new technologies are vying to replace these GPUs with higher-performance and more power-efficient solutions. But none of these new architectures can be successful without a complete software stack. This session, moderated by The Linley Group principal analyst Linley Gwennap, will discuss new hardware and software for accelerating AI in the data center.

A Neuromorphic Processor for Next-Generation AI
Mike Davies, Director, Neuromorphic Computing Lab, Intel

Most processors currently used to execute neural networks rely on Von Neumann architectures and matrix-multiplication accelerators. Neural networks in nature use neither. Neuromorphic architectures follow nature's fundamentally different approach by applying basic principles from neuroscience, such as fine-grain parallelism, integrated memory and compute, sparse and recurrent connectivity, continuous adaptation, and event-driven "spikes" for communication. This presentation describes Loihi, a neuromorphic research processor that provides compelling gains in performance and energy efficiency over conventional solutions, scalable by orders of magnitude, for novel AI and machine-learning workloads.

Scaling AI Training Systems with the Gaudi Processor
Eitan Medina, Chief Business Officer, Habana

The demand for AI training is surging, along with an exponential increase in compute requirements. Data centers face an urgent need for improved scalability in both cost and power efficiency. One approach to enabling efficient, open-standard scaling is integrating ten on-chip 100GbE RoCE (RDMA over Converged Ethernet) ports. This presentation will discuss the Gaudi heterogeneous compute-and-networking architecture and how it can be employed in scalable training racks and clusters to address data-parallel and model-parallel training.

Using Industry-Standard Techniques to Accelerate AI Software
Andrew Richards, CEO, Codeplay

This presentation is an introduction to the industry-standard approaches AI developers use to build cutting-edge, high-performance intelligent software. The latest AI techniques require huge amounts of processing power that only very powerful accelerator processors can deliver, and these accelerators create new software challenges.

There will be Q&A and a panel discussion featuring the above speakers.

Session 4: Security

Security as an afterthought is thoughtless security. The industry now favors a holistic approach that builds defenses into every layer of the system stack, from the foundational hardware to the high-level software. This session, moderated by The Linley Group principal analyst Bob Wheeler, presents different approaches to meeting these security needs.

The SiFive Open Secure Platform Architecture
Dany Nativel, Security Director, SiFive

The need for secure-by-design processors in embedded systems is rising, requiring a new security architecture based on a holistic view of the processor. This presentation introduces a new SiFive scalable security architecture that will deliver more than a single trusted world in a configurable SoC design tailored to industry needs.

Security for AI Applications
Neeraj Paliwal, VP of Cryptography Products, Rambus

Increasing demand for high-performance AI applications is driving the development of specialized silicon and systems for AI training and inference workloads. These applications involve sensitive data and valuable IP, but strong security is often overlooked. This presentation outlines the security threats to AI applications and examines the security building blocks needed to mitigate them. It then shows how to implement strong, flexible security for these applications using secure hardware and software.

Enabling Smarter Security on SmartNIC
Sakir Sezer, CTO, Titan IC

Emerging smart NICs combine proven network-processing technologies with dedicated offload accelerators to deliver computationally expensive network and security functions, such as IPsec, OVS, or WAF. Smart NICs are redefining cybersecurity within the cloud, enabling the monitoring and inspection of network traffic to ensure the security and integrity of cloud services and counteract threats. This presentation will discuss the security technologies underpinning Titan IC's offerings, which are tailored for smart NICs.

There will be Q&A and a panel discussion featuring the above speakers.

3:00pm-3:20pm Break - Sponsored by Arm
3:20pm-4:30pm Session 5: Accelerating AI SoCs

When embedded systems such as automotive, mobile clients, and IoT devices require AI capability, they typically integrate a deep-learning accelerator into the main SoC. Several IP vendors provide configurable accelerators that run neural networks more efficiently than CPUs or GPUs can. This session, moderated by The Linley Group senior analyst Mike Demler, will present several licensable AI accelerators, contrasting their architectures and capabilities.

AI Inference at the Edge – Recent Trends and Solutions
Lazaar Louis, Senior Director, Cadence

Smartphones, AR/VR headsets, smart-home devices, automobiles, drones, and robots require high-performance, power-efficient AI processing for speech and imaging. Cadence Tensilica's HiFi, Vision, and DNA processor IP are designed to meet these advanced AI requirements. This presentation will share details on their performance and power-efficiency improvements, including enhancements to the software platform.

Delivering Scaled-Up ML Performance for Mobile, Embedded, and Automotive
Raviraj Mahatme, Sr Product Manager, ML Group, Arm

Machine learning (ML) using deep neural networks (DNNs) is an emerging field transforming object detection, classification, and speech processing. This presentation describes Arm's ML processor, which accelerates DNNs using a custom architecture. It will explain how this processor, as part of the heterogeneous Arm ML platform, can help you implement tailored solutions for scale-out performance with high efficiency for mobile, infrastructure, automotive, and embedded devices.

There will be Q&A and a panel discussion featuring the above speakers.

Session 6: AI for IoT Devices

Although data-center servers are best equipped to offload AI workloads from remote clients, local processing is preferred when network connectivity is constrained or absent. Also, some light workloads are better executed on the client or network edge when latency is critical. This session, moderated by The Linley Group senior analyst Tom R. Halfhill, highlights two chips that efficiently handle machine learning on IoT devices that require very little power.

Practical Machine Learning at the Extreme Edge
Semir Haddad, Sr Director, Product Marketing, Eta Compute

Machine learning has the potential to revolutionize the capability of the billions of devices at the extreme edge that perform most of the sensing and actuation in our daily lives. These devices operate on microcontrollers with severe power constraints, limited performance, and no or very simple operating systems. This presentation will discuss the power-performance tradeoffs and software requirements needed to bring machine learning to the extreme edge, and how the Tensai SoC addresses these challenges.

Machine Learning on a Tiny Low-Power FPGA
Hoon Choi, Fellow, R&D, Lattice Semiconductor

This presentation describes a machine-learning solution implemented on a 5.4mm² low-power FPGA. It performs multiple key-phrase detection, hand-gesture classification, human detection and counting, local face identification, and location-feature extraction. The engine structure can fit into fewer than 5,000 LUTs and can still support various-sized networks. We also describe the network selection and optimization we used to make the design suitable for low-power, low-cost edge applications.

There will be Q&A and a panel discussion featuring the above speakers.

4:30pm-6:00pm Reception and Exhibits – Sponsored by Intel


Platinum Sponsor

Gold Sponsor: Micron

Industry Sponsor

Media Sponsor