Arteris Articles

SemiWiki: Intelligence in the Fog

Kurt Shuler, VP Marketing at Arteris IP, and Bernard Murphy (SemiWiki) discuss two of the hottest domains in tech today, AI and automotive, in this new SemiWiki blog:

Intelligence in the Fog

June 12, 2019 - By Bernard Murphy

AI is creeping into places we might not expect, such as communication infrastructure. Bernard Murphy learns from Kurt Shuler how AI and AI-centric design methods are becoming more important in this surprising domain.

By now, you should know about AI in the cloud for natural language processing, image ID, recommendation, etc. (thanks to Google, Facebook, AWS, Baidu and several others) and AI on the edge for collision avoidance, lane-keeping, voice recognition and many other applications. But did you know about AI in the fog? First, a credit – my reference for all this information is Kurt Shuler, VP Marketing of Arteris IP. I really like working with these guys because they keep me plugged in to two of the hottest domains in tech today – AI and automotive. That and the fact that they’re really the only game in town for a commercial NoC solution, which means that pretty much everyone in AI, ADAS and a bunch of other fields (e.g. storage) is working with them.

For more information, please visit the Arteris IP AI package webpage: http://www.arteris.com/flexnoc-ai-package

Topics: SoC, semiconductor, automotive, ADAS, artificial intelligence, SemiWiki, Kurt Shuler, FlexNoC AI Package, NoC interconnect

SemiWiki: ML and Memories: A Complex Relationship

Kurt Shuler, VP Marketing at Arteris IP, helped Bernard Murphy (SemiWiki) understand the multiple ways that different types of memory need to connect to machine learning accelerators in the latest SemiWiki blog:

ML and Memories: A Complex Relationship

March 13, 2019 - By Bernard Murphy

How do AI architectures connect with memories? The answer is more complex than in conventional SoC architectures.

No, I’m not going to talk about in-memory-compute architectures. There’s interesting work being done there, but here I’m going to talk about mainstream architectures for memory support in Machine Learning (ML) designs. These are still based on conventional memory components/IP such as cache, register files, SRAM and various flavors of off-chip memory, including the not yet “conventional” high-bandwidth memory (HBM). However, the way these memories are organized, connected and located can vary quite significantly between ML applications.
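To make the point about memory organization concrete, here is a minimal, hypothetical sketch (the class and function names are illustrative assumptions, not part of any Arteris IP or FlexNoC API). It estimates whether a layer’s working set fits in a compute tile’s local SRAM or a shared on-chip cache, or must instead be streamed from off-chip HBM/DDR over the NoC – the kind of trade-off that drives how memories are organized and connected in an ML design.

    from dataclasses import dataclass

    # Hypothetical memory hierarchy for one compute tile in an ML SoC.
    # Capacities and bandwidths are illustrative placeholders only.
    @dataclass
    class TileMemory:
        local_sram_bytes: int    # tightly coupled SRAM next to the MAC array
        shared_l2_bytes: int     # on-chip cache shared over the NoC
        offchip_bw_gbps: float   # HBM/DDR bandwidth reachable via the NoC

    @dataclass
    class Layer:
        weight_bytes: int
        activation_bytes: int

    def placement(layer: Layer, mem: TileMemory) -> str:
        """Pick the closest memory level whose capacity covers the layer's
        working set; otherwise the layer streams from off-chip memory."""
        working_set = layer.weight_bytes + layer.activation_bytes
        if working_set <= mem.local_sram_bytes:
            return "local SRAM (no NoC traffic for weights)"
        if working_set <= mem.shared_l2_bytes:
            return "shared on-chip cache (NoC read traffic per tile)"
        return "stream from HBM/DDR (bounded by off-chip bandwidth)"

    tile = TileMemory(local_sram_bytes=2 << 20,   # 2 MiB
                      shared_l2_bytes=16 << 20,   # 16 MiB
                      offchip_bw_gbps=256.0)
    conv = Layer(weight_bytes=1 << 20, activation_bytes=4 << 20)
    print(placement(conv, tile))  # -> shared on-chip cache (NoC read traffic per tile)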

For more information, please visit the Arteris IP AI package webpage: http://www.arteris.com/flexnoc-ai-package

Topics: semiconductor, artificial intelligence, SemiWiki, Kurt Shuler, FlexNoC AI Package, NoC interconnect, cache coherence

Arteris IP Awarded 1st Place for Technical Paper at Synopsys Users Group (SNUG) Silicon Valley 2019

Benny Winefeld, Solutions Architect at Arteris IP, Receives the 1st Place Best Paper Award at SNUG Silicon Valley 2019

Arteris IP presented this technical paper, "Using Machine Learning for Characterization of NoC Components", on March 20, 2019.

Benny Winefeld, Solutions Architect at Arteris IP, accepted the 1st Place Best Paper Award from the SNUG Technical Committee during SNUG Silicon Valley. Twenty-nine papers competed for the award.

In the photo above, Benny receives the award from the SNUG committee. From left to right: Ken Nelson, VP Field Support Operations; Benny Winefeld, Solutions Architect, Arteris IP; Tony Todesco, SNUG SV Technical Chair, AMD; and Deirdre Hanford, Co-GM, Synopsys.

Topics: Synopsys, NoC, machine learning, artificial intelligence, soft IP, NoC interconnect, SNUG

Arteris IP is Presenting at The Linley Spring Processor Conference, April 10-11, 2019!


Don't Miss the Arteris IP Presentation on AI SoC Architectures, Thursday, April 11, 2019 

Location: Hyatt Regency, Santa Clara, CA
Session 5: SoC Design: Thursday, April 11
1:15 pm - 2:45 pm

Arteris IP presenting: "Adapting SoC Architectures for Types of Artificial-Intelligence Processing"

Come to the Linley Spring Processor Conference on April 10-11, 2019, and attend the Arteris IP presentation on Thursday, April 11, during Session 5: SoC Design, where we will describe lessons learned on how to use network-on-chip (NoC) technology to efficiently implement SoC architectures targeted at different types of AI processing, including advanced techniques such as when to use tiling or cache coherence, whether for edge/battery-operated or datacenter chips.
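As a rough illustration of the kind of decision the presentation covers, the sketch below frames the choice between a tiled, non-coherent organization and hardware cache coherence as a function of power budget and data sharing. The thresholds and heuristics are assumptions made purely for illustration; they are not the content of the Arteris IP talk or FlexNoC guidance.

    def suggest_interconnect(power_budget_w: float,
                             shared_model_state: bool,
                             num_accelerators: int) -> str:
        """Toy heuristic for sketching a NoC organization for an AI SoC.
        Thresholds are illustrative assumptions, not Arteris IP guidance."""
        if power_budget_w < 5 and num_accelerators <= 4:
            # Edge/battery-operated: a small non-coherent NoC with
            # software-managed local SRAMs keeps power and area down.
            return "non-coherent NoC + tiled accelerators with local SRAM"
        if shared_model_state:
            # Many engines reading/writing the same tensors benefit from
            # hardware cache coherence instead of explicit copies.
            return "cache-coherent NoC across accelerator clusters"
        # Datacenter-scale, mostly independent engines: tile a mesh NoC
        # and scale bandwidth by replicating the tile.
        return "tiled mesh NoC, coherence kept local to each tile"

    print(suggest_interconnect(power_budget_w=2.0,
                               shared_model_state=False,
                               num_accelerators=2))
    # -> non-coherent NoC + tiled accelerators with local SRAM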

April 11 Agenda: https://www.linleygroup.com/events/agenda.php?num=46&day=2

Topics: NoC, semiconductor, ArterisIP, artificial intelligence, SoCs, edge/battery-operated, cache coherence, datacenter chips