
A Guide to Processors for Deep Learning

Second Edition

To be Published February 2019
Purchase by January 31 and take $300 off the list price

Authors: Linley Gwennap and Mike Demler

Single License: $4,495 (single copy, one user)
Corporate License: $5,995

Pages: ~180

Ordering Information



Take a Deep Dive into Deep Learning

Deep learning, a leading branch of artificial intelligence (AI), has seen rapid changes and improvements over the past few years and is now being applied to a wide variety of applications. Typically implemented using neural networks, deep learning powers image recognition, voice processing, language translation, and many other web services in large data centers. It is an essential technology in self-driving cars, providing both object recognition and decision making. It is even moving into client devices such as smartphones and embedded (IoT) systems.

Even the fastest CPUs cannot efficiently execute the highly complex neural networks these advanced problems require. Boosting performance demands more specialized hardware architectures. Graphics chips (GPUs) have become popular, particularly for the initial training function. Many other hardware approaches have recently emerged, including DSPs, FPGAs, and dedicated ASICs. These solutions promise order-of-magnitude improvements, but GPU vendors are tuning their designs to better support deep learning.

Autonomous vehicles are an important application for deep learning. Vehicles don't perform training; instead, they focus on the simpler inference tasks. Even so, they require very powerful processors, yet they are more constrained in cost and power than data-center servers, forcing different tradeoffs. Several chip vendors are delivering products specifically for this application; some automakers are developing their own ASICs instead.

Large chip vendors such as Intel and Nvidia currently generate the most revenue from deep-learning processors. But many startups, some well funded, have emerged to develop new, more customized architectures for deep learning; Graphcore, Habana, and Wave are among the first to deliver products. Eschewing these options, leading data-center operators such as Amazon, Google, and Microsoft have developed their own hardware accelerators. In addition, several IP vendors offer specialized cores for deep learning, mainly for inference in autonomous vehicles and other client devices.

We Sort Out the Market and the Products

A Guide to Processors for Deep Learning covers hardware technologies and products. The report provides deep technology analysis and head-to-head product comparisons, as well as analysis of company prospects in this rapidly developing market segment. Which products will win designs, and why? The Linley Group’s unique technology analysis provides a forward-looking view, helping sort through competing claims and products.

The guide begins with a detailed overview of the market. We explain the basics of deep learning, the types of hardware acceleration, and the end markets, including a forecast for both automotive and data-center adoption. The heart of the report provides detailed technical coverage of announced chip products from AMD, Eta Compute, Graphcore, GreenWaves, Gyrfalcon, Intel (including former Altera, Mobileye, Movidius, and Nervana technologies), Mythic, NXP, Nvidia (including Tegra and Tesla), Qualcomm, Wave Computing, and Xilinx. It also covers IP cores from AImotive, Arm, Cadence, Cambricon, Ceva, Imagination, Synopsys, Videantis, and the open-source NVDLA. Other chapters cover Google’s TPU family of ASICs and Microsoft’s Brainwave. Finally, we bring it all together with technical comparisons in each product category and our analysis and conclusions about this emerging market.

Make Informed Decisions

As the leading vendor of technology analysis for processors, The Linley Group has the expertise to deliver a comprehensive look at the full range of chips designed for a broad range of deep-learning applications. Principal analyst Linley Gwennap and senior analyst Mike Demler use their experience to deliver the deep technical analysis and strategic information you need to make informed business decisions.

Whether you are looking for the right processor or IP for an automotive application or a data-center accelerator, or seeking to partner with or invest in one of these vendors, this report will cut your research time and save you money. Make the smart decision: order A Guide to Processors for Deep Learning today.

This report is written for:

  • Engineers designing chips or systems for deep learning or autonomous vehicles
  • Marketing and engineering staff at companies that sell related chips who need more information on processors for deep learning or autonomous vehicles
  • Technology professionals who want an introduction to deep learning, vision processing, or autonomous-driving systems
  • Financial analysts who desire a hype-free analysis of deep-learning processors and of which chip suppliers are most likely to succeed
  • Press and public-relations professionals who need to get up to speed on this emerging technology

This market is developing rapidly — don't be left behind!

What's New in This Edition

The second edition of A Guide to Processors for Deep Learning covers dozens of new products and technologies announced in the past year, including:

  • Nvidia’s new Tesla T4 (Turing) accelerator for inference
  • Arm’s first machine-learning acceleration IP
  • Intel’s Myriad X chip, with a new neural engine, for embedded systems
  • Ceva’s NeuPro, a customized IP core for deep learning
  • Intel’s VNNI instruction-set extensions for accelerating AI inference
  • AMD’s MI60 Radeon Instinct accelerator based on the 7nm Vega chip
  • Imagination’s new PowerVR 3NX deep-learning accelerators
  • A detailed analysis of Microsoft’s FPGA-based Brainwave accelerator
  • Graphcore’s first product, the GC2 accelerator card
  • Eta Compute’s spiking neural-network accelerator
  • Details on Mythic’s analog-compute technology for neural networks
  • Cadence’s Vision Q6 and DNA 100 AI cores
  • AImotive’s third-generation AI core for autonomous vehicles
  • Qualcomm’s Hexagon 690, which adds a neural engine for a 3x gain
  • The open-source NVDLA, based on Nvidia’s proven Xavier design
  • Products from Cambricon, China’s leading AI-acceleration startup
  • Videantis’s new v-MP6000UDX computer-vision IP
  • Details on Google’s TPUv2 and TPUv3
  • Other new vendors such as BrainChip, Cornami, GreenWaves, Habana, NovuMind, and SambaNova
  • The new AI-Benchmark and MLPerf tests

PRELIMINARY TABLE OF CONTENTS
List of Figures
List of Tables
About the Authors
About the Publisher
Preface
Executive Summary
1 Deep-Learning Technology
2 Deep-Learning Applications
3 Deep-Learning Accelerators
4 Market Forecast
5 AImotive
6 AMD
7 Arm
8 Cadence
9 Ceva
10 Google
11 Graphcore
12 Gyrfalcon
13 Imagination
14 Intel
15 Intel Mobileye
16 Microsoft
17 Mythic
18 NVDLA
19 Nvidia Tegra
20 Nvidia Tesla
21 NXP
22 Qualcomm
23 Synopsys
24 Videantis
25 Wave Computing
26 Xilinx
27 Other Vendors
Abee
Amazon
BrainChip
Cambricon
Cerebras
Cornami
eSilicon
Eta Compute
General Processor
GreenWaves
Groq
Habana
Huawei
NovuMind
SambaNova
28 Processor Comparisons
29 Conclusions
Appendix: Further Reading
Index
--
This is a preliminary table of contents and is subject to change
