Linley Newsletter
Edged.AI Offers Low-Latency IP
August 10, 2021 | Author: Bob Wheeler
It’s rush hour in the AI market as new entrants race to capture designs in autonomous transportation. Whereas the consumer market for self-driving vehicles is taking longer to develop than many predicted, numerous lower-volume autonomous systems are moving into production. Agricultural machinery and rail transport are two such examples, and their system requirements are distinct from (and simpler than) those of automobiles. Newly incorporated startup Edged.AI seeks to address these applications with deep-learning-accelerator intellectual property (DLA IP) positioned between low-end microcontrollers and high-power GPUs.
Edged brands its product as a tensor processing unit (TPU), adopting Google’s term. It offers TPU IP at two performance levels: 8 TOPS and 32 TOPS of peak INT8 performance. The design uses a single programmable core to deliver its maximum throughput on streaming data, thereby minimizing inference latency. Supplementing its INT8 matrix engine, the DLA includes a vector engine for nonconvolution layers. The startup has already proven its design in FPGAs as well as in a pair of 28nm test chips. It has released RTL to initial customers.
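As a rough illustration of what those peak ratings imply, the sketch below back-calculates the INT8 MAC-array size from a peak-TOPS figure using the standard convention that each multiply-accumulate counts as two operations. The 1.0GHz clock is an assumption for illustration only; Edged has not disclosed its core's MAC count or target frequency.

```python
# Minimal sketch: relating a peak INT8 TOPS rating to MAC-array size at an
# assumed clock speed. The 1.0 GHz figure is hypothetical, not a disclosed
# Edged specification.

def macs_required(peak_tops: float, clock_ghz: float = 1.0) -> int:
    """Return the number of INT8 MAC units implied by a peak-TOPS rating.

    Each multiply-accumulate counts as two operations, so
    peak ops/sec = 2 * MACs * clock (Hz).
    """
    return int(peak_tops * 1e12 / (2 * clock_ghz * 1e9))

for tops in (8, 32):  # the two performance points Edged offers
    print(f"{tops} TOPS at 1.0 GHz -> ~{macs_required(tops):,} INT8 MACs")

# Output:
# 8 TOPS at 1.0 GHz -> ~4,000 INT8 MACs
# 32 TOPS at 1.0 GHz -> ~16,000 INT8 MACs
```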
To stand out in the crowded market, Edged plans to differentiate its future products by adding training features. In the meantime, however, it faces direct competition from both startups and established IP vendors for inference DLAs. So far, the young company has made impressive progress, with customers already evaluating its test silicon.
Subscribers can view the full article in the Microprocessor Report.
Subscribe to the Microprocessor Report and always get the full story!