Overview

AI applications are rapidly becoming critical to our hyperconnected world. According to MarketsandMarkets, the AI chipset market is expected to grow to $57.8B by 2026, at a 40.1% CAGR from 2020. The recent skyrocketing demand for Large Language Model AI, like OpenAI’s ChatGPT, has triggered a race for Edge AI hardware optimized for inferencing. Key development challenges include balancing power consumption against performance, dealing with limited memory and processing resources, handling data security and privacy concerns, managing connectivity and communication protocols, and optimizing algorithms for deployment on low-power devices.

Different architectures are being explored to find the best performance per watt for each application. On the low end, CPUs and GPUs are commonly used, while at the high end, custom designs such as the Google TPU take over. These architectures naturally push up chip size, which can create timing closure issues as well as massive traffic congestion and power problems. In that context, on-chip networks for machine learning are becoming increasingly important as architects seek to maximize performance and power efficiency while exploiting the spatially distributed nature of ML algorithms.

Arteris has been working with customers using ML technology for years, supporting near real-time inferencing at the edge. The FlexNoC XL Option for large systems enables developers to create flexible network architectures (both logical and physical) while preserving the benefits of automated generation. It supports globally asynchronous, locally synchronous (GALS) design techniques to address timing closure issues, and it provides methods to keep every connection into the memory controller highly utilized to address bandwidth considerations.

Key Benefits

Scalability

Create highly scalable Ring, Mesh, and Torus topologies with highly efficient routers. Unlike black-box compiler approaches, SoC architects can edit generated topologies and also optimize each individual network router, if desired.
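As a back-of-the-envelope illustration of why topology choice matters at scale, here is a minimal Python sketch. It is not the FlexNoC tool flow or API; the 16-node size and XY-style routing are assumptions chosen for the example. It compares average hop count, a rough proxy for latency and wire activity, between a ring and a mesh of the same node count:

```python
# Illustrative sketch only -- not the FlexNoC tool flow or API.
# Models why topology choice matters: average hop count (a proxy
# for latency) for a ring vs. a mesh with the same node count.
import itertools

def ring_hops(n, a, b):
    """Shortest hop distance between nodes a and b on an n-node ring."""
    d = abs(a - b)
    return min(d, n - d)

def mesh_hops(cols, a, b):
    """Manhattan hop distance on a grid with `cols` columns (XY routing)."""
    ax, ay = a % cols, a // cols
    bx, by = b % cols, b // cols
    return abs(ax - bx) + abs(ay - by)

def average_hops(n, dist):
    """Average hop count over all unordered node pairs."""
    pairs = list(itertools.combinations(range(n), 2))
    return sum(dist(a, b) for a, b in pairs) / len(pairs)

n = 16  # e.g., 16 NoC routers (assumed for the example)
print("16-node ring, avg hops:", average_hops(n, lambda a, b: ring_hops(n, a, b)))
print("4x4 mesh,     avg hops:", average_hops(n, lambda a, b: mesh_hops(4, a, b)))
```

Even at this toy scale the mesh averages fewer hops than the ring, and the gap widens with node count, which is one reason editable topology generation matters for large ML designs.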

Bandwidth

Increase on-chip and off-chip bandwidth with HBM2 and multichannel memory support, multicast/broadcast writes, VC-Link™ Virtual Channels, and source-synchronous communications.
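To illustrate one ingredient of this, multichannel memory support, here is a minimal sketch; the 8-channel count and 256 B interleave granularity are assumptions for the example, not FlexNoC or HBM2 parameters. Interleaving the address space spreads a sequential burst evenly across channels, keeping every link into the memory controllers busy:

```python
# Illustrative sketch only -- the channel count and interleave
# granularity below are assumptions, not FlexNoC or HBM2 specifics.
from collections import Counter

NUM_CHANNELS = 8          # e.g., 8 memory channels (assumed)
INTERLEAVE_BYTES = 256    # interleave granularity (assumed)

def channel_for_address(addr):
    """Map a physical address to a memory channel by interleaving."""
    return (addr // INTERLEAVE_BYTES) % NUM_CHANNELS

# A sequential 4 KiB burst of 64 B beats lands evenly on all channels:
hits = Counter(channel_for_address(a) for a in range(0, 4096, 64))
print(dict(hits))   # each of the 8 channels receives 8 of the 64 beats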

Low Power

Fewer wires and fewer gates consume less power; breaking communication paths into smaller segments allows powering only the active segments; and a simple internal protocol allows aggressive clock gating.
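To make the segmentation point concrete, here is a toy Python model; the segment count and per-segment power figures are purely assumed for illustration. It shows how clock-gating the idle segments of a path saves power relative to toggling a monolithic link end to end:

```python
# Illustrative sketch only -- a toy model of why segmented links save
# power: only segments carrying traffic in a cycle burn dynamic power,
# while idle segments can be clock-gated. All numbers are assumptions.
SEGMENTS = 6                  # pipeline segments along one NoC path (assumed)
ACTIVE_POWER_MW = 1.0         # dynamic power per active segment (assumed)
GATED_POWER_MW = 0.05         # residual power of a clock-gated segment (assumed)

def path_power(active_segments):
    """Power for one path when only some segments see traffic this cycle."""
    return sum(ACTIVE_POWER_MW if s in active_segments else GATED_POWER_MW
               for s in range(SEGMENTS))

# Monolithic link: the whole path toggles even for a short hop.
print("monolithic link:", path_power(set(range(SEGMENTS))), "mW")
# Segmented link: traffic crossing only 2 of 6 segments gates the rest.
print("segmented link :", path_power({0, 1}), "mW")
```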

Partners

Arteris and SiFive have partnered to accelerate the development of edge AI SoCs for consumer electronics and industrial applications. The partnership combines SiFive’s multi-core RISC-V processor IP with Arteris’ Ncore cache coherent interconnect IP, providing high performance and power efficiency while reducing project schedules and integration costs. The collaboration has led to the SiFive 22G1 X280 Customer Reference Platform, which incorporates SiFive X280 processor IP and Arteris Ncore cache coherent interconnect IP on the AMD Virtex UltraScale+ FPGA VCU118 Evaluation Kit.

Semidynamics is a provider of fully customizable RISC-V processor IP, specializing in high-bandwidth, high-performance cores with Vector Units, Tensor Units, and Gazzillion technology, targeted at machine learning and AI applications. This collaboration enhances the flexibility and interoperability of highly configurable processor IP with system IP, aiming to deliver integrated, optimized solutions focused on accelerating artificial intelligence, machine learning, and high-performance computing (HPC) applications.
