Linley Spring Processor Conference 2018

April 11 - 12, 2018
Hyatt Regency, Santa Clara, CA


Agenda for Day One: April 11, 2018

9:00am-10:00am  Keynote

How Well Does Your Processor Support AI?
Linley Gwennap, Principal Analyst, The Linley Group

After starting in the data center, AI is now expanding through the edge to clients, automobiles, embedded systems, and even IoT nodes. The question is no longer whether a processor should support AI but what kind of acceleration it provides. This presentation will discuss which applications are driving demand for neural networks, which architectures best accelerate these applications, and which vendors offer processors and IP for these applications. It will also discuss processors for traditional data center and IoT workloads.

10:00am-10:20am  BREAK – Sponsored by Micron
10:20am-12:00pm  Session 1: Data Center

Data-center services are evolving from simple web functions to voice interfaces, image searches, content filtering, and data mining. Deep neural networks (AI) are being broadly deployed to support these services and sift through massive pools of "big data" swiftly and efficiently. This session, moderated by The Linley Group principal analyst Bob Wheeler, will discuss how server designers can improve the processing, memory, and I/O capabilities of their systems to address these changing workloads.

Training a Neural Network Using Dataflow Computing
Chris Nicol, SVP and CTO, Wave Computing

Training and inferencing of deep neural network graphs using the classic von Neumann architectural approach is inefficient. Heterogeneous computing models that rely on CPUs and accelerators are bound by synchronization and communication bottlenecks that lower effective utilization. Customers are demanding new system architectures that process neural networks faster and more efficiently. This presentation will explain how replacing the old CPU+coprocessor paradigm with a new dataflow approach can enable a scalable platform for inferencing and training of deep neural networks.

Addressing Process Scaling Challenges in Server Memory
Eric Caward, Business Development Manager, Micron

Servers demand memory that is continually higher in performance and lower in power. To support these demands, manufacturing technology is quickly moving to sub-20nm nodes. DRAM refresh-related single-bit errors continue to dominate memory "failures," but given appropriate system design, these highly unpredictable yet correctable events can be adequately addressed, even for leading-edge memory. This presentation will discuss the performance impact and tradeoffs of implementing ECC and the reliability concerns around refresh-related single-bit errors that are prevalent when ramping new process nodes.

Advancing Data Center Storage
Mohit Gupta, Senior Director Product Marketing, Rambus

In our increasingly connected world, petabytes of data are continuously generated by a wide range of devices, systems, and endpoints. The resulting digital tsunami has prompted data-center heavyweights to demand massively more storage, and even with current NVMe options, the PCIe interconnect has lagged in keeping pace. Fortunately, the next generation of PCIe is here. This talk covers best practices for creating a successful PCIe 4 design and discusses the architectural changes that will come with PCIe Gen 5.

There will be Q&A and a panel discussion featuring the above speakers and guest panelist Kirk M. Bresniker, Chief Architect, Hewlett Packard Enterprise.

12:00pm-1:10pm  LUNCH – Sponsored by Synopsys
1:10pm-2:40pm  Session 2: SoC Design

As processor designs become more complex, designers are tapping into a broad range of third-party IP cores and tools. CPU and GPU cores handle the main processing in most designs, and designers have many options to choose from. But performance optimization, debugging, and timing closure become more difficult as chips include more cores. This session, moderated by The Linley Group principal analyst Linley Gwennap, will discuss IP options for SoC designers and tools to help streamline the design process.

Capturing the Mainstream Market with Premium Experiences
Andy Craigen, Director, Product Management, Arm

The compute performance in GPUs and CPUs has revolutionized experiences in mobile devices. However, new experiences need to move beyond premium devices into the mainstream to accelerate both adoption and technological advancement. This presentation will discuss the latest Arm solutions that bring high-end experiences like machine learning and augmented reality to the mainstream market. Learn how GPU capabilities, combined with new DynamIQ CPU configurations, will enable an optimal design with area savings, scalable performance levels, and efficiency for high-volume markets.

Interconnect IP Physical Awareness and Optimization
Matthew Mangan, Corporate Applications Engineer, ArterisIP

The SoC interconnect is one of the most important IP blocks in an SoC, as it is the logical and physical instantiation of the architecture, carrying virtually all data traffic. With 16nm processes, timing closure becomes a major challenge, as some valid logical architectures cannot close timing during place-and-route. This presentation describes an Interconnect Physical Optimization technology as it has been used on a complex, multi-domain SoC to optimize its architecture from a physical point of view.

Hardware Monitors Provide Tools for Optimization at Scale
Gajinder Panesar, CTO, UltraSoC

Embedded monitoring and analytics hardware within an SoC allows collection of high-granularity data on the real-world behavior of both the chip and the wider system. This presentation outlines the use of such hardware-based infrastructures to support performance optimization in high-performance computing environments. The hardware-based approach can detect hard-to-identify issues such as contention and cache coherence. Compared with traditional solutions such as sampling profilers or application- and system-level instrumentation, this approach makes it substantially easier to locate non-fatal bugs.

There will be Q&A and a panel discussion featuring the above speakers.

2:40pm-3:00pm  BREAK – Sponsored by Micron
3:00pm-4:40pm  Session 3: Autonomous Cars

To navigate safely, autonomous vehicles must create 3D models of their surroundings in real time. Deep-learning processors running high-performance neural networks power these advanced computer-vision systems, enabling identification of pedestrians, road markings, traffic signs, other vehicles and objects in images streamed from multiple cameras and sensors. This session, moderated by The Linley Group senior analyst Mike Demler, will describe deep-learning processor architectures that are enabling the development of autonomous vehicles.

A Scalable Platform for Deep Learning and Visual Compute
Marco Jacobs, VP Marketing, Videantis

Deep learning can bring powerful intelligent sensing to our next-generation vehicles, phones, AR/VR, and smart cameras. Videantis has developed a new fine-grain scalable processor platform that efficiently runs the full processing chain for a variety of visual-computing tasks: deep learning, computer vision, video coding, and imaging. This presentation will highlight the processor design choices made, describe the development tools used to map full applications onto the architecture, and show key example applications.

The Journey from ADAS to Autonomous
Igor Arsovski, CTO of ASIC BU, GLOBALFOUNDRIES

The car-electronics architecture is changing as OEMs and suppliers transition from ADAS to autonomous vehicles, and two divergent architectures have emerged to address autonomous driving. The presentation will detail these two main architectures, the processing-performance challenges and technology needs of each approach, and how choices in processor technology affect sensor design in terms of bandwidth, performance, and power. This analysis will provide insight into how a foundry customer can make the right technology choice for automotive designs.

Deep-Learning Requirements for Processors in Autonomous Vehicles
Gordon Cooper, Product Marketing Manager, Synopsys

Deep-learning techniques for embedded vision processors are critical in the push toward fully autonomous vehicles. Processors must quickly interpret content from images and react. To accomplish this, embedded vision processors must be hardware-optimized for ADAS levels of performance, achieve low power and small area, have efficient programming tools, and support required algorithms. This presentation will discuss the current and next-generation requirements for ADAS vision applications, including the need for accurate power estimation, and discuss the HW/SW evolution to meet ADAS requirements.

There will be Q&A and a panel discussion featuring the above speakers.

4:40pm-6:10pm  Reception and Exhibits – Sponsored by Synopsys


Platinum Sponsor: Micron
Gold Sponsor: NetSpeed Systems