Linley Fall Processor Conference 2018
Covers processors and IP cores used in embedded, communications, automotive, IoT, and server designs.
October 31 - November 1, 2018
Hyatt Regency, Santa Clara, CA


Agenda for Day One: October 31, 2018
9:00am-9:50am  Keynote

Keynote: Breaking New Bottlenecks in Processor Design
Linley Gwennap, Principal Analyst, The Linley Group

As emerging workloads such as AI and IoT place greater demands on processor designers, they are running into a series of old and new bottlenecks. Boosting compute performance in AI accelerators is relatively easy, but moving data in and out of memory and through the chip is often the bottleneck. Particularly in data centers, moving data between systems and reliably storing it creates additional problems. Security didn't seem like a performance limiter until users installed the patches for Meltdown/Spectre. Power efficiency is paramount for nearly every application from IoT to servers. This presentation will discuss the problems facing processor designers and some of the new approaches they are using to break these bottlenecks.

There will be a Q&A discussion featuring the above speaker.

9:50am-10:10am  BREAK - Sponsored by Micron
Parallel tracks: Track A and Track B
10:10am-12:15pm  Session 1 (Track A): AI SoC Design

AI processors employ complex heterogeneous architectures with high data parallelism. To meet performance requirements, designers must optimize the data flow between processor cores and ensure that the interface to external memory meets the high-bandwidth demands of deep neural networks. This session, moderated by The Linley Group senior analyst Mike Demler, will discuss how low-latency DNN accelerators, AI-optimized interconnect, and 3D packaging can improve AI-processor performance.

HW Enablement for Machine Learning: From Devices to Systems
Igor Arsovski, CTO of ASIC BU, GLOBALFOUNDRIES

This presentation will start with a broad overview of hardware optimization for machine learning, from energy-efficient MAC units that increase compute capability to advanced packaging techniques that increase device count and system-interconnect bandwidth. It will then focus on 3D memory integration, showcasing a 3D SRAM that delivers close to a 4x improvement in memory capacity over embedded memory along with lower power, lower latency, and up to 8x higher bandwidth than current off-chip memories.

An Advanced DSP Architecture for Neural Networks and Audio Processing
Sachin Ghanekar, Design Engineering Group Director, Cadence

Following the popularity of digital home assistants such as Amazon's Alexa, voice-controlled user interfaces are increasingly important to manufacturers. Advanced DSP algorithms are rapidly evolving to eliminate noise and isolate the speaker's voice to improve understanding. Additionally, neural-network-based speech-recognition algorithms are performing more tasks locally because of latency and privacy concerns. This presentation will introduce a new DSP designed to support the most advanced DSP and NN processing, simplifying design, reducing cost and power, and shortening time to market.

An AI-Enabled Platform for Designing SoCs for AI Applications
Anush Mohandass, Chief Operating Officer, NetSpeed Systems

AI is infiltrating many applications, including vision, speech, forecasting, robotics, and diagnostics, and it is driving sweeping changes in computational architectures and SoC design. Inside these SoCs is a new data flow that requires efficient 'any-to-any' data exchanges among a multitude of compute elements. This presentation will show how advanced features such as multicast/broadcast can be implemented to improve performance and efficiency in AI-enabled SoCs and accelerator ASICs used for data centers, autonomous vehicles, AR/VR, and advanced video analytics.

Implementing Flexible Interconnect Topologies for Machine Learning Acceleration
Benoit de Lescure, Vice President, Technology, ArterisIP

Interconnect technology is a key element for implementing efficient systems-on-chip (SoCs) for neural-network training. When replicating homogeneous processing elements in CNN and RNN accelerators, designers often use regular interconnect topologies such as rings, meshes, and 3D tori. This presentation will introduce a new software product that efficiently implements regular and irregular network-on-chip (NoC) topologies, allowing design teams to more quickly create custom interconnects for neural-network and machine-learning SoCs while optimizing power consumption, performance, and area.

There will be Q&A and a panel discussion featuring the above speakers.

10:10am-12:15pm  Session 2 (Track B): Accelerating Networking and Storage

As storage and network bandwidths climb, the importance of offloading I/O processing also increases. Offload architectures include both CPU-managed look-aside designs and in-line approaches that are transparent to the CPU. This session, led by The Linley Group principal analyst Bob Wheeler, will examine several approaches to offloading and accelerating specific workloads, including encryption and network-flow processing.

Eliminating the Gaps in Securing Data from Creation-to-Use
Bob Doud, Sr. Director of Marketing, Mellanox

A long-standing tenet of good security is to protect data as close to its source as possible. Although "self-encrypting" drives have become commonplace, they only protect against certain threats and require the user to trust the system in which they are installed. This presentation will discuss alternative architectures for data-at-rest security, including encrypting data at the server with acceleration and isolation on the network interface card (NIC).

Line-Rate Link Protection Beyond 100Gbps
Maxim Demchenko, Silicon IP Solutions Architect, Inside Secure

Data communication is performed over numerous high-speed links that require protection. Demand for security spans the complete spectrum, including high-speed data-center interconnects, high-capacity transport networks, and the various links deployed in 5G wireless infrastructure, automotive, and AI systems. Nor is security limited to Ethernet or long-distance communication: interfaces such as PCIe can be exposed to similar threats. This presentation will describe how Inside Secure's Silicon IP solutions protect links at rates up to and beyond 800Gbps.

Improving Server Efficiency by Optimizing the Network Interface and Its Memory
Salman Jiva, Sr. Business Development Manager, CNBU, Micron

SmartNICs increase the efficiency of server cores by offloading network functions that would typically require thousands of clock cycles, whether or not a packet is ultimately forwarded. To hold a large number of networking flows, SmartNICs include relatively large memories on chip and on the board. This presentation will discuss the breadth of memory offerings and the tradeoffs among them, including bandwidth, density, power, and cost, that maximize the value proposition of the SmartNIC and translate directly into improved server efficiency.

An Open, Composable Architecture for Best-in-Class Domain-Specific Accelerators
Niel Viljoen, Chief Executive Officer & Founder, Netronome

Domain-specific accelerators in heterogeneous servers are quickly becoming mainstream, and edge computing will further constrain power and form factors. Heterogeneous integration (combining multiple die/IP into one package) can reduce development costs. However, current approaches are closed or not scalable, hindering flexibility and fast innovation. This presentation will describe an open heterogeneous architecture that enables disaggregation and economies of scale for accelerators. Best-in-class, power-efficient, high-performance accelerators can be rapidly composed at low cost by integrating best-of-breed components from multiple vendors.

There will be Q&A and a panel discussion featuring the above speakers.

12:15pm-1:30pm  LUNCH - Sponsored by Synopsys
1:30pm-3:00pm  Session 3 (Track A): Open-Source CPUs

The open-source RISC-V architecture has quickly become popular, in part because it carries neither license fees nor restrictions on modification. Thus, the architecture can be readily adapted to new problems, such as fixing security flaws in existing CPU designs. This session, moderated by The Linley Group principal analyst Linley Gwennap, will discuss how three different companies are creating RISC-V CPU designs and applying them to different types of problems.

Bringing AndeStar Advantages to RISC-V Processors
Justin Tseng, Vice President of RD-VLSI and M&S, Andes Technology

RISC-V, an open instruction-set architecture, has gained momentum and rapidly evolved into a new mainstream processor technology with a rich ecosystem and a fast-growing number of real-world implementations. Andes, an experienced vendor of CPU cores and solutions, is bringing the advantages of its AndeStar technologies to RISC-V. This presentation will provide an update on the status of that porting effort, its advantages, and upcoming product plans.

Opportunities and Challenges of Building Silicon in the Cloud
Yunsup Lee, CTO, SiFive

The semiconductor industry faces rising costs and burgeoning demand for custom silicon. Developing and prototyping new semiconductor products typically requires extensive chip-design expertise and many months of effort by a large engineering team. This presentation will discuss a new design flow that allows a chip to be prototyped in less time and with less design expertise, and it will include a demo illustrating the opportunities and challenges of building chips in the cloud.

Secure Processing After Meltdown and Spectre
Ben Levine, Senior Director of Product Management, Rambus

Meltdown and Spectre exposed severe security flaws in modern processors. Turning a highly complex processor that is optimized for performance into a secure processor is very difficult. Instead, security-sensitive code should be moved out of general-purpose processors and into secure hardware cores. These cores can use processors optimized for secure operation and can implement a wide range of hardware protections against attacks. This presentation will show an example of a secure core implemented using a custom-designed RISC-V CPU targeted specifically for security.

There will be Q&A and a panel discussion featuring the above speakers.

1:30pm-3:00pm  Session 4 (Track B): IoT System Design

Designing connected devices for the Internet of Things requires low-power chips that nevertheless enable strong performance, high integration, and wireless networking. Among the possible solutions are better embedded processors, innovative FPGAs, and power-efficient global navigation. This session, moderated by The Linley Group senior analyst Tom R. Halfhill, will discuss new options for IoT designers.

Scalable FPGA Platform for High-Volume Products
Sammy Cheung, Founder, President, and CEO, Efinix

FPGAs serve applications ranging from glue logic in mobile devices to high-performance data-center acceleration. However, the FPGA's high cost, high power, and cumbersome programming models limit high-volume deployment. This presentation will discuss Efinix's Quantum FPGA technology, which is IC-process agnostic and architecturally stable across numerous applications, and which provides better performance, power, and area than traditional FPGAs. These advantages enable low-power, cost-efficient FPGAs, embedded FPGA IP, and high-volume programmable ASICs.

Deterministic Network and I/O Connectivity Requirements of the Industrial IoT
Jeff Steinheider, Product Manager, Digital Networking, NXP

Manufacturers are tearing down the walls separating information technology (IT) and operational technology (OT), implementing an Industrial Internet of Things (IIoT) as part of a broader Industry 4.0 transformation. Critical to this transformation are four technologies: connectivity, processing, human-machine interfaces, and security. NXP's LS1028A Layerscape processor addresses all four. This presentation publicly discloses the full LS1028A processor family for the first time and focuses on the connectivity that IIoT designs require, including Ethernet-based Time-Sensitive Networking (TSN).

Low-Power GNSS for Tomorrow’s IoT SoCs
Richard Edgar, Director of Communications Technology, Ensigma, Imagination

The Internet of Things (IoT) promises to revolutionize the way people interact with everyday devices: at home, at work, in the car, everywhere. Many of these applications require knowledge of the IoT device's location and use GPS technologies to improve accuracy. This requirement, however, creates challenges for equipment vendors: how to keep the device small and mobile, and how to minimize the impact on the device's battery.

There will be Q&A and a panel discussion featuring the above speakers.

3:00pm-3:20pm  BREAK - Sponsored by Micron
3:20pm-4:30pm  Session 5 (Track A): Mobile AI Design

Functions such as Face ID and computational photography have driven the need for on-device AI processing in smartphones. To run such applications efficiently, Google introduced a neural-network API that automatically selects the best core for a given workload. This session, moderated by The Linley Group senior analyst Mike Demler, will discuss how combining a CPU, GPU, and DSP with specialized neural-network accelerators reduces power consumption when running on-device AI applications.

Qualcomm (TBA)
Travis Lanier, Sr. Director, Product Management, Qualcomm

TBA

Reducing Power and Accelerating Neural-Network Performance in Android Devices
Allen Watson, Product Marketing Manager, Synopsys

In Android 8.1, Google added the Neural Networks API (NNAPI) to its mobile operating system. NNAPI is designed to simplify the offloading of neural networks to accelerators such as GPUs and DSPs. This presentation will show how NNAPI, TensorFlow Lite, and embedded vision processors work together to increase neural-network performance while reducing power consumption. It will also compare the power and performance of running neural networks on an application processor versus on an accelerator.
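
To illustrate the offload path the abstract describes, the sketch below shows one way an Android application can request NNAPI acceleration through the TensorFlow Lite Java API; TFLite then delegates supported operations to NNAPI, which schedules them on whatever accelerator the device's vendor driver exposes. This is an illustrative sketch, not material from the presentation: the class name, thread count, and tensor shapes are hypothetical, and later TensorFlow Lite releases expose the same capability through an explicit NnApiDelegate.

import org.tensorflow.lite.Interpreter;
import java.nio.MappedByteBuffer;

public final class NnapiOffloadSketch {
    /**
     * Wraps a TensorFlow Lite flatbuffer model in an Interpreter that asks
     * TFLite to delegate supported operations to NNAPI. Operations NNAPI
     * cannot handle fall back to the CPU threads configured below.
     */
    public static Interpreter createNnapiInterpreter(MappedByteBuffer model) {
        Interpreter.Options options = new Interpreter.Options();
        options.setUseNNAPI(true);   // request NNAPI delegation
        options.setNumThreads(2);    // CPU threads for any fallback ops
        return new Interpreter(model, options);
    }

    /** Runs one inference; input and output shapes are hypothetical placeholders. */
    public static void runOnce(Interpreter interpreter) {
        float[][] input = new float[1][224 * 224 * 3];  // e.g., a flattened image
        float[][] output = new float[1][1000];          // e.g., classifier scores
        interpreter.run(input, output);
    }
}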

There will be Q&A and a panel discussion featuring the above speakers.

3:20pm-4:30pm  Session 6 (Track B): IoT Security

The Internet of Things is a great opportunity for new products, but it potentially opens billions of doors to malicious hackers who want to penetrate secure networks. The good news is that IoT devices needn't be defenseless; the bad news is that strong security is a start-to-finish design imperative. This session, moderated by The Linley Group senior analyst Tom R. Halfhill, will discuss multiple solutions that resist direct attacks, side-channel exploits, and even future vulnerabilities that quantum computers may expose.

Addressing Small-Processor Security Today and Tomorrow
Derek Atkins, Chief Technology Officer, SecureRF

Securing small processors in the IoT is challenging. Many devices powering the IoT lack the resources to run mainstream cryptographic protocols, and with the arrival of quantum computing, protecting these solutions will become significantly harder. This presentation will introduce quantum-resistant protocols, now under NIST review, based on well-studied Group Theoretic Cryptography. These protocols are fast and small enough to fit on today's lowest-resource devices. Topics will include implementation examples and a description of available tools.

Physically Securing Your IoT Device at the Silicon Level
Diya Soubra, Director Business Development, Arm

Billions of IoT devices are expected to densely populate our homes, cities, and industries; many of these will handle valuable data while sitting within physical reach of attackers, making them ripe for silicon attacks. To date, IoT attack mitigation has focused largely on software countermeasures, but with physical attacks becoming easier and cheaper, we must increase our focus on silicon security. This presentation will discuss the importance of physical security and countermeasures, including the new Arm Cortex-M35P processor and side-channel mitigation IP.

There will be Q&A and a panel discussion featuring the above speakers.

4:30pm-6:00pm  Reception and Exhibits - Sponsored by Synopsys

 

Premier Sponsor

Platinum Sponsor: Micron

Gold Sponsors: NetSpeed Systems, Andes Technology

Industry Sponsor