
Most Recent Guides & Forecasts

Communications Semiconductor Market Forecast 2019-2024
Provides five-year revenue forecasts for many categories of communications semiconductors, including Ethernet products, embedded and server processors, and FPGAs.
Communications Semiconductor Market Share 2019
Provides market share data for many categories of communications semiconductors, including Ethernet products, processors, and FPGAs.
A Guide to Processors for Deep Learning
Covers processors for accelerating deep learning, neural networks, and vision processing for AI training and inference in data centers, autonomous vehicles, and client devices.
A Guide to Multicore Processors
Covers 32- and 64-bit embedded processors with four or more CPU cores that are used for wired and wireless communications, storage, security, and other applications.
A Guide to Ethernet Switch and PHY Chips
Covers data-center switch chips for 10G, 25G, 40G, 50G, and 100G Ethernet. Also includes physical-layer (PHY) chips for 10GBase-T and 100G Ethernet.
A Guide to Processors for IoT and Wearables
Covers processors and connectivity chips for IoT clients and wearable devices, focusing primarily on single-chip solutions with integrated radios.

More Guides »

White Papers

Building Better AI Chips
As the move to 7nm and beyond becomes ever more complex and expensive, GlobalFoundries is taking a different approach to improving performance by enhancing its 12nm node with lower operating voltages and new IP blocks. The changes are particularly effective for AI (neural-network) accelerators. The new 12LP+ technology builds on the success that the foundry’s customers have already achieved in AI acceleration.
Unified Inference and Training at the Edge
As more edge devices add AI capabilities, some applications are becoming increasingly complex. Wearables and other IoT devices often have multiple sensors, requiring different neural networks for each sensor, or they may use a single complex network to combine all the input data, a technique called sensor fusion. Others implement on-device training to customize the application. The GPX-10 processor can handle these advanced AI applications while keeping power to a minimum.
Deterministic Processing for Mission-Critical Applications
Time-sensitive applications such as automotive and robotics require fast and consistent response times. Features such as caches and branch prediction hamper responsiveness. SiFive CPUs can disable these features to deliver deterministic responses. They also combine Linux and RTOS CPUs in the same cluster to enhance responsiveness while offering smaller die area than competitors.
C-V2X Drives Intelligent Transportation
This white paper describes the benefits that cellular vehicle-to-everything (C-V2X) technology will provide by enabling vehicles to communicate directly with each other (V2V), transportation infrastructure (V2I), and network-connected service providers (V2N).
More White Papers »
Linley Reports
Subscribe to our Reports »

October 26, 2020

Arm’s SVE2 Enables Shift From Neon
Arm’s second-generation Scalable Vector Extension (SVE2) will enable a transition from the decade-old Neon technology, lifting SIMD performance in smartphones.
Think Silicon Spins AI Accelerator
Think Silicon’s licensable Neox cores combine RISC-V with nonstandard extensions that accelerate graphics and neural-network inference. Customers can add their own instructions.
Editorial: Shopping for Chip Companies
AMD is reportedly negotiating a $30 billion bid for Xilinx, the leading FPGA vendor. The two companies have little product or end-market synergy; the deal seems driven more by financial considerations than by synergy.

October 19, 2020

Neoverse Advances SIMD for HPC
Arm’s Neoverse V1 core adopts the Scalable Vector Extension, replacing the venerable Neon technology. It’s the company’s first CPU purpose-built for high-performance computing.
SiPearl Develops Arm HPC Chip
SiPearl is a little company with a big goal: helping Europe wean itself off x86-based supercomputers and reduce its dependence on foreign silicon. Its first chip, Rhea, uses Arm’s Neoverse V1 CPU.
Coherent Logix Configures Edge AI
The niche embedded-processor supplier is expanding into the commercial edge-AI market. By the end of the year, it plans to tape out its fourth-generation HyperX processor, delivering 6.6 TOPS.

October 12, 2020

Elkhart Lake Has Industrial Aims
Intel’s Atom x6000E moves to 10nm technology and adds embedded features, serving applications beyond those available to its PC-derived predecessors.
Kneron KL720 Boosts Efficiency
Kneron has quickly rolled out its second product, offering a big performance boost to 1.5 TOPS at about 1W, making it well suited to smart cameras and other vision-based edge systems.
Qualcomm Samples First AI Chip
The Cloud AI 100 accelerator provides more than 4x the ResNet-50 inference throughput of an Nvidia T4 card at about the same power. Most initial customers plan to deploy it at the network edge.

October 5, 2020

Ampere Delivers Big Graphics Boost
The new Ampere graphics cards deliver one of Nvidia’s largest generational performance increases ever and offer a second-generation ray-tracing engine that doubles throughput and improves motion-blur rendering.
Qualcomm Rolls More Snapdragon 7s
Qualcomm’s new Snapdragon 750G reduces the entry price for mmWave 5G phones. Although it offers greater performance than all 7-series chips other than the 768G, it targets phones that sell for just $400.

More Articles »

Free Newsletter

Linley Newsletter
Analysis of new developments in microprocessors and other semiconductor products
Subscribe to our Newsletter »

Events

Linley Fall Processor Conference 2020
October 20-22 and 27-29, 2020 (All Times Pacific)
Virtual Event
More Events »

In the Media