Kurt Shuler, VP Marketing at Arteris IP, helped Bernard Murphy (SemiWiki) explore the multiple ways that different types of memory need to connect to machine learning accelerators in the latest SemiWiki blog:
ML and Memories: A Complex Relationship
March 13th, 2019 - By Bernard Murphy
How do AI architectures connect with memories? The answer is more complex than in conventional SoC architectures.
No, I’m not going to talk about in-memory-compute architectures. There’s interesting work being done there, but here I’m going to talk about mainstream architectures for memory support in Machine Learning (ML) designs. These are still based on conventional memory components/IP such as cache, register files, SRAM and various flavors of off-chip memory, including not yet “conventional” high-bandwidth memory (HBM). However, the way these memories are organized, connected and located can vary quite significantly between ML applications.
To read the entire article, please click here: https://www.semiwiki.com/forum/content/8131-ml-memories-complex-relationship.html