Arteris Articles

SemiWiki: Why High-End ML Hardware Goes Custom

Kurt Shuler, VP Marketing at Arteris IP, provides more insight into what's happening in this highly dynamic space in the latest SemiWiki blog, written by Bernard Murphy:

Why High-End ML Hardware Goes Custom

January 30th, 2019 - By Bernard Murphy

In a hand-waving way it’s easy to answer why any hardware goes custom (ASIC): faster, lower power, more opportunity for differentiation, and sometimes cost, though price isn’t always a primary factor. But I wanted to do a bit better than hand-waving, especially because these ML hardware architectures can become pretty exotic, so I talked to Kurt Shuler, VP Marketing at Arteris IP, and I found a useful MIT tutorial paper on arXiv. Between these two sources, I think I have a better idea now.

Start with the ground reality. Arteris IP has a bunch of named customers doing ML-centric design, including for example Mobileye, Baidu, HiSilicon and NXP. Since they supply network on chip (NoC) solutions to those customers, they have to get some insight into the AI architectures that are being built today, particularly where those architectures are pushing the envelope. What they see and how they respond in their products is revealing.

You can learn more about what Arteris IP is doing to support AI in these leading-edge ML design teams HERE. They certainly seem to be in a pretty unique position in this area.

For more information, download the FlexNoC AI Package datasheet: http://www.arteris.com/flexnoc-ai-package

Topics: NoC, semiconductor, SemiWiki, Kurt Shuler, AI chips, FlexNoC AI Package, accelerators, NoC interconnect, ML-centric design

SemiWiki: Disturbances in the AI Force

Bernard Murphy (SemiWiki) reflects on a discussion with Kurt Shuler, VP Marketing at Arteris IP, on customer trends in design for advanced ML accelerators, why these look quite different from traditional processor architectures, and the implications for design, particularly around the NoC interconnect, in this SemiWiki blog:

Disturbances in the AI Force

January 3rd, 2019 - By Bernard Murphy

In the normal evolution of specialized hardware IP functions, initial implementations start in academic research or R&D at big semiconductor companies, motivating new ventures specializing in functions of that type, which then either build critical mass to make it as a chip or IP supplier (such as Mobileye, initially) or get sucked into a larger chip or IP supplier (such as Intel or ARM or Synopsys). That is how hardware functions ultimately settled, and many still do.

But recently the gravitational pull of mega-companies has distorted this normally straightforward evolution. In cloud services this list includes Amazon, Microsoft, Baidu and others. In smartphones you have Samsung, Huawei and Apple - yep, Huawei is ahead of Apple in smartphone shipments and is gunning to be #1. These companies, neither semiconductor nor IP suppliers, are big enough to do whatever they want to grab market share. What they do to further their goals in competition with the other giants can have a major impact on the evolution path for IP suppliers.

Arteris IP is closely involved with many of these companies, from Cambricon to Huawei/HiSilicon to Baidu to emerging companies like Lynxi, offering its network on chip (NoC) solutions with an AI package that allows architecture tuning for the special needs of high-end NN designs. Check out more here: http://www.arteris.com/flexnoc-ai-package

Topics: NoC, semiconductor, SemiWiki, Kurt Shuler, AI chips, FlexNoC AI Package, hardware IP, accelerators, NoC interconnect

SemiWiki: On-Chip Networks at the Bleeding Edge of ML

On-chip networks become a lot more challenging at the high end of machine learning (ML). Bernard Murphy (SemiWiki) talked with Kurt Shuler, VP Marketing at Arteris IP, about the experience they have built over years of working with well-known ML product builders, and how this has influenced the AI package recently released by Arteris IP, in this SemiWiki blog:

On-Chip Networks at the Bleeding Edge of ML 

November 29th, 2018 - By Bernard Murphy

I wrote a while back about some of the more exotic architectures for machine learning (ML), especially for neural net (NN) training in the data center but also in some edge applications. In less hairy applications, we’re used to seeing CPU-based NNs at the low end; GPUs, most commonly (and most widely known) in data centers, as the workhorse for training and for the early incarnations of some mobile apps (mobile AR/MR, for example); FPGAs in applications where architecture/performance becomes more important but power isn’t super-constrained; DSPs in applications pushing performance per watt harder; and custom designs such as the Google TPU pushing even harder.

Topics: SoC, NoC, FPGAs, semiconductor, machine learning, FlexNoC, SemiWiki, Kurt Shuler, AI chips, FlexNoC AI Package

SemiWiki: Supporting ASIL-D Through Your Network on Chip

Kurt Shuler, VP Marketing at Arteris IP, has written a white paper, 'How to efficiently achieve ASIL-D compliance using NoC technology', and discusses the details with Bernard Murphy in this SemiWiki blog:

Supporting ASIL-D Through Your Network on Chip 

September 20th, 2018 - By Bernard Murphy

ASIL-D compliance (the most stringent automotive safety integrity level) has become much more prominent as a requirement in automotive applications than we might have expected. Bernard Murphy (SemiWiki) provides his take after reading Kurt Shuler’s white paper on how the NoC interconnect connecting IPs can help meet this goal, and why this approach to safety in integration is more efficient than some frequently discussed alternatives.

Topics: SoC, NoC, semiconductor, ISO 26262, ASIL D, ISO 26262 certification, SemiWiki, Kurt Shuler, safety culture, compliance, ASIL-B, FMEDA, failure mitigation