Accelerating Edge AI Developments With Flexible Network-On-Chip (NoC)
Architectures, Timing Closure, and Bandwidth Optimization
Edge AI applications are rapidly becoming critical for our hyperconnected world. According to MarketsandMarkets, the AI chipset market is expected to grow to $57.8B in 2026, a 40.1% CAGR from 2020. The recent skyrocketing demand for large language models (LLMs), such as OpenAI's ChatGPT, has triggered a race for edge AI hardware optimized for inference. Key development challenges include balancing power consumption against performance, dealing with limited memory and processing resources, handling data security and privacy concerns, managing connectivity and communication protocols, and optimizing algorithms for deployment on low-power devices.
Chip developers are experimenting with different architectures to find the best performance per watt for their applications. At the low end, CPUs and GPUs are common, while at the high end, custom designs such as the Google TPU dominate. These architectures naturally push up chip size, which makes timing closure harder to reach and can create massive traffic congestion and power problems. In that context, on-chip networks for machine learning are becoming increasingly important as architects seek to maximize performance and power efficiency while exploiting the spatially distributed nature of ML algorithms.
Arteris has been working with customers using ML technology for years, supporting near real-time inferencing at the edge. The FlexNoC XL Option for large systems enables developers to create flexible network architectures (both logical and physical) while preserving the benefits of automated generation. It supports globally asynchronous, locally synchronous (GALS) design techniques to address timing closure issues, and provides methods to maintain high utilization of every connection into the memory controller to address bandwidth considerations.
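To see why timing closure pushes large AI dies toward GALS techniques, consider the rough arithmetic below. This is a hypothetical Python sketch with invented delay and margin figures, not Arteris or foundry data: as clocks rise, the distance a signal can cross in one cycle shrinks, so paths spanning a large die must be pipelined or split across clock domains.

```python
# Hypothetical timing-closure arithmetic; all figures are invented
# for illustration only.
CLOCK_GHZ = 1.5                # assumed target NoC clock
WIRE_DELAY_PS_PER_MM = 120.0   # assumed delay of a repeated top-level wire

def max_sync_span_mm(clock_ghz, setup_margin_ps=100.0):
    """Longest wire a signal can cross in one cycle, after setup margin."""
    period_ps = 1000.0 / clock_ghz
    return (period_ps - setup_margin_ps) / WIRE_DELAY_PS_PER_MM

span = max_sync_span_mm(CLOCK_GHZ)
print(f"At {CLOCK_GHZ} GHz a signal covers ~{span:.1f} mm per cycle;")
print("longer links need pipeline stages or a GALS clock boundary.")
```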
Create highly scalable and highly efficient Ring, Mesh, and Torus topologies. Unlike black-box compiler approaches, SoC architects can edit generated topologies and also optimize each individual network router, if desired.
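As an illustration of these topologies (this is not the FlexNoC tooling or its API; the Router class, build_mesh function, and all parameters are invented for this sketch), the following Python snippet builds a 2D mesh and shows that a torus differs only in its wrap-around links:

```python
from dataclasses import dataclass, field

@dataclass
class Router:
    x: int
    y: int
    links: list = field(default_factory=list)  # neighboring (x, y) coords

def build_mesh(cols, rows, torus=False):
    """Build a cols x rows mesh; torus=True adds wrap-around links."""
    routers = {(x, y): Router(x, y) for x in range(cols) for y in range(rows)}
    for (x, y), r in routers.items():
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if torus:  # wrap at the edges so every router has four neighbors
                nx, ny = nx % cols, ny % rows
            if (nx, ny) in routers and (nx, ny) != (x, y):
                r.links.append((nx, ny))
    return routers

mesh = build_mesh(4, 4, torus=True)
print(len(mesh), "routers;", sum(len(r.links) for r in mesh.values()) // 2, "links")
```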
Increase on-chip and off-chip bandwidth with HBM2 and multichannel memory support, multicast/broadcast writes, VC-Link™ Virtual Channels, and source-synchronous communications.
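For a sense of the bandwidth at stake, the sketch below works through the widely published HBM2 numbers (8 channels of 128 bits at up to 2.0 Gb/s per pin) together with a simple, hypothetical address-interleaving function that spreads sequential traffic across channels so no single memory-controller port becomes the bottleneck:

```python
CHANNELS = 8          # channels per HBM2 stack
BUS_BITS = 128        # bits per channel
GBPS_PER_PIN = 2.0    # HBM2 data rate, up to 2.0 Gb/s per pin

stack_gbs = CHANNELS * BUS_BITS * GBPS_PER_PIN / 8  # bits -> bytes
print(f"One HBM2 stack: {stack_gbs:.0f} GB/s aggregate")  # 256 GB/s

def channel_of(addr, interleave_bytes=256):
    """Illustrative mapping: interleave addresses across channels
    on 256 B granules."""
    return (addr // interleave_bytes) % CHANNELS

# Sequential traffic lands on all channels in turn, not just one:
print([channel_of(a) for a in range(0, 2048, 256)])  # [0, 1, ..., 7]
```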
Fewer wires and fewer gates consume less power, breaking communication paths into smaller segments makes it possible to power only the active segments, and the simple internal protocol allows aggressive clock gating.
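The saving from powering only active segments is easy to quantify in a toy model. The figures below are invented purely for illustration, not measured FlexNoC data; the point is that segment-level gating scales power with traffic rather than with path length:

```python
SEGMENTS = 8
POWER_PER_SEGMENT_MW = 1.5   # hypothetical dynamic power per active segment

def path_power(active_segments):
    """Dynamic power when only the listed segments are toggling."""
    return len(active_segments) * POWER_PER_SEGMENT_MW

monolithic = SEGMENTS * POWER_PER_SEGMENT_MW   # whole path always powered
gated = path_power([0, 1])                     # traffic only in 2 segments
print(f"always-on: {monolithic} mW, gated: {gated} mW "
      f"({100 * (1 - gated / monolithic):.0f}% saved)")
```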
Arteris and SiFive have partnered to accelerate the development of edge AI SoCs for consumer electronics and industrial applications. The partnership combines SiFive's multi-core RISC-V processor IP with Arteris' Ncore cache coherent interconnect IP, providing high performance and power efficiency with reduced project schedules and integration costs. The collaboration has produced the SiFive 22G1 X280 Customer Reference Platform, which integrates SiFive X280 processor IP and Arteris Ncore interconnect IP on the AMD Virtex UltraScale+ FPGA VCU118 Evaluation Kit.
Arteris Headquarters
595 Millich Dr Suite 200
Campbell, CA 95008 USA
+1 408 470 7300
Contact: info@arteris.com
Arteris Austin
9601 Amberglen Blvd, Suite 117
Austin, TX 78729 USA
Arteris Montigny
2 Rue George Stephenson
78180 Montigny le Bretonneux, France
+33 1 61 37 38 40
Arteris Nice
240 Rue Evariste Galois
06410 Biot, France
Arteris Paris
251 Rue du Faubourg Saint-Martin
75010 Paris, France
+33 1 40 21 35 50
Arteris Nanjing
Room 2708, Yunfeng Mansion
No. 8 Zhongshan North Road
Nanjing, China, 210038
安通思半导体技术(南京)有限公司 (Arteris Semiconductor Technology (Nanjing) Co., Ltd.)
+86 25 58355692
Arteris Seoul
U-Space 2B, #1001,
670, Daewangpangyo-ro Bundang-gu,
Seongnam-si, Gyeonggi-do,
Seoul 13494 Korea
+82 (10) 4704 9526
Arteris Tokyo
#402, 2-10-15 Shibuya, Shibuya-ku
Tokyo, Japan 150-0002
+81 3 4405 0399
Need help? Visit the helpdesk
Semiconductor IP: help@arteris.com
SoC Integration: mds_service_desk@arteris.com