
SemiWiki: On-Chip Networks at the Bleeding Edge of ML

- Kurt Shuler
On-chip networks become a lot more challenging at the high end of machine learning (ML). Bernard Murphy (SemiWiki) talked with Kurt Shuler, VP of Marketing at Arteris IP, about the experience Arteris IP has built up over years of working with well-known ML product builders, and how that experience has shaped the AI package Arteris IP recently released. From the SemiWiki blog:
On-Chip Networks at the Bleeding Edge of ML
November 29th, 2018 – By Bernard Murphy
I wrote a while back about some of the more exotic architectures for machine learning (ML), especially for neural net (NN) training in the data center but also in some edge applications. In less hairy applications, we’re used to seeing CPU-based NNs at the low end; GPUs, most commonly (and most widely known) in data centers, as the workhorse for training and for early incarnations of some mobile apps (mobile AR/MR, for example); FPGAs where architecture/performance becomes more important but power isn’t super-constrained; DSPs in applications pushing performance per watt harder; and custom designs such as the Google TPU pushing even harder.
To read the entire article, please click here:
https://www.semiwiki.com/forum/content/7860-chip-networks-bleeding-edge-ml.html