Linley Newsletter
Chipmakers Boost AI Efficiency by 5x
August 24, 2021
Author: Aakash Jani
As wafer costs rise sharply, chip designers are working to boost area efficiency and performance. Our analysis of the newest flagship smartphone processors reveals that AI efficiency increased by as much as 5x from one generation to the next. We looked at the deep-learning accelerators (DLAs) in five SoCs: the Apple A14, HiSilicon (Huawei) Kirin 9000, MediaTek Dimensity 1200, Qualcomm Snapdragon 888, and Samsung Exynos 2100.
Compared with last year, chip designers increased their DLAs’ die area, both in absolute terms and as a portion of the complete SoC. These devices can therefore perform more AI tasks locally for greater security and lower latency. Using the extra hardware, the smartphone processors delivered massive performance gains over the previous generation. Most are on an exponential AI-performance trajectory, but MediaTek’s improvement was slower, causing it to lose the performance lead.
For example, Samsung’s DLA expanded by 62% to accommodate another core, yet performance rose by 3x on AI-Benchmark. Additional components and new AI architectures drove these area increases. Of all the processors, only the Dimensity 1200 reused the previous generation’s architecture.
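As a rough sketch of how such efficiency gains can be derived (assuming, as a working definition not spelled out here, that area efficiency means benchmark performance per unit of DLA die area), Samsung’s figures imply roughly

\[ \text{efficiency gain} = \frac{\text{performance gain}}{\text{area gain}} = \frac{3.0}{1.62} \approx 1.85\times \]

On that definition, the 5x headline figure would correspond to the best-improving DLA among the five designs rather than to this particular example.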
For this year’s analysis, we again chose AI-Benchmark, but because Apple devices can’t run that Android-only test, we also added AIMark. In addition to the overall AI-Benchmark score, we examined the INT8, FP16, and LSTM subscores to better identify ideal applications for each DLA.
The new configurations and architectures have shifted the AI-Benchmark rankings considerably. Although MediaTek was last year’s leader, it fell behind this year. On AIMark, Samsung makes up considerable ground but still falls short.
Subscribers can view the full article in the Microprocessor Report.
Subscribe to the Microprocessor Report and always get the full story!