AI applications are becoming critical in our hyperconnected world, driven by the growing demand for generative AI. According to Futurum Intelligence, the AI chipset market for processors and accelerators in the data center segment was valued at $38 billion in 2023 and is projected to grow at a 30% CAGR, reaching $138 billion by 2028. This surge is fueled by the rapid adoption of generative AI models and chipmakers’ focus on optimizing AI inferencing chip support.
GPUs dominate the data center AI market, holding a 92% market share and accounting for 74% of AI chipsets used in data centers. The GPU market, led by companies like Nvidia, is expected to grow at a 30% CAGR, reaching $102 billion by 2028. CPUs also play a crucial role in AI processing, with a 28% CAGR forecast, growing from $7.7 billion in 2023 to $26 billion by 2028. Hyperscale cloud providers, such as Google and AWS, are significant drivers of AI chipset purchases, accounting for 43% of the market in 2023 and forecast to reach 50% by 2028. Custom cloud accelerators, including Google TPU and AWS Trainium, are increasingly essential in scaling AI infrastructure.
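For readers who want to check the arithmetic, these growth rates follow the standard CAGR formula, CAGR = (end / start)^(1/years) − 1. A quick sketch in plain Python, using the rounded dollar figures quoted above:

```python
# Sanity-check the cited growth rates with the standard CAGR formula:
#   CAGR = (end_value / start_value) ** (1 / years) - 1
for label, start, end in [("overall AI chipset market ($B)", 38, 138),
                          ("CPU segment ($B)", 7.7, 26)]:
    cagr = (end / start) ** (1 / 5) - 1   # 2023 -> 2028 spans 5 years
    print(f"{label}: {cagr:.0%}")
# Prints ~29% and ~28%, consistent with the cited 30% and 28% CAGRs.
```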
As the demand for AI chipsets grows, key development challenges arise, including balancing power consumption and performance, managing limited memory and processing resources, addressing data security and privacy concerns, and optimizing connectivity and communication protocols. Additionally, deploying AI algorithms on low-power devices requires careful algorithmic optimization to ensure efficient performance.
Different architectures are being explored to achieve the best performance per watt. While CPUs and GPUs are the expected choices at the low end, more advanced custom designs, such as the Google TPU, are deployed for high-end applications. However, these architectures push chip size and complexity, creating potential issues with timing closure, traffic congestion, and power management. As a result, on-chip networks for machine learning are becoming increasingly important, helping architects maximize performance and efficiency while managing the spatial distribution of machine learning workloads.
Arteris, which has long supported edge AI inferencing, offers solutions like the FlexNoC XL Option, which allows developers to create flexible network architectures. These solutions address critical timing closure issues, bandwidth demands, and memory controller utilization challenges, which are key to achieving high performance in increasingly complex systems.
Key Benefits
Scalability
Create highly scalable Ring, Mesh, and Torus topologies with high efficiency. Unlike black-box compiler approaches, SoC architects can edit generated topologies and also optimize each individual network router, if desired.
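As an illustration of what an editable topology means in practice, here is a minimal Python sketch of a generated 2D mesh whose routers stay individually tunable afterward. The names (Router, mesh, pipeline_stages) are hypothetical, chosen for the example; this is not the FlexNoC API.

```python
# Minimal sketch of an editable 2D mesh NoC topology (illustrative only).
from dataclasses import dataclass, field

@dataclass
class Router:
    x: int
    y: int
    pipeline_stages: int = 1                  # per-router knob, e.g. for timing closure
    links: list = field(default_factory=list)

def mesh(cols: int, rows: int) -> dict:
    """Generate a cols x rows mesh; each router links to its east/north neighbor."""
    routers = {(x, y): Router(x, y) for x in range(cols) for y in range(rows)}
    for (x, y), r in routers.items():
        for dx, dy in ((1, 0), (0, 1)):
            if (x + dx, y + dy) in routers:
                r.links.append((x + dx, y + dy))
    return routers

noc = mesh(4, 4)
# Unlike a black-box compiler flow, the generated topology remains editable:
noc[(2, 2)].pipeline_stages = 3   # deepen one congested router's pipeline
noc[(1, 1)].links.pop()           # prune a link the floorplan cannot afford
```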
Bandwidth
Increase on-chip and off-chip bandwidth with HBM2 and multichannel memory support, multicast/broadcast writes, VC-Link™ Virtual Channels, and source-synchronous communications.
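The benefit of multicast and broadcast writes is easy to quantify with a back-of-the-envelope hop count: one packet travels the shared path once and is replicated at the branch router, instead of one packet per destination traversing the shared path repeatedly. A sketch with assumed (not measured) hop counts:

```python
# Back-of-the-envelope comparison of unicast vs. multicast writes.
# All hop counts are illustrative assumptions, not FlexNoC measurements.
destinations = 8      # e.g. one weight tensor broadcast to 8 compute tiles
shared_hops = 4       # hops on the common path, source to branching router
fanout_hops = 2       # hops from the branch router to each destination

unicast_traversals = destinations * (shared_hops + fanout_hops)   # 48
multicast_traversals = shared_hops + destinations * fanout_hops   # 20
print(unicast_traversals, multicast_traversals)
```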
Low Power
Fewer wires and fewer gates consume less power; breaking communication paths into smaller segments makes it possible to power only the active segments; and a simple internal protocol allows aggressive clock gating.
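A minimal sketch of the segmentation idea, under the simplifying assumption that a segment is clocked only in cycles where it carries traffic (hypothetical model, not Arteris RTL):

```python
# Segment-level clock gating: only segments carrying traffic in a
# cycle are clocked; idle segments burn no dynamic power that cycle.
class Segment:
    def __init__(self, name):
        self.name = name
        self.pending = []          # flits waiting to traverse this segment

    def tick(self):
        if not self.pending:
            return 0               # clock gated: idle this cycle
        self.pending.pop(0)        # advance one flit
        return 1                   # segment clocked for one cycle

path = [Segment("s0"), Segment("s1"), Segment("s2"), Segment("s3")]
path[1].pending.append("flit-A")   # traffic on only one segment

active = sum(seg.tick() for seg in path)
print(f"{active}/{len(path)} segments clocked this cycle")  # 1/4
```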
Newest Innovation
NoC tiling is an emerging trend in SoC design that uses proven, robust network-on-chip IP to facilitate scaling, condense design time, speed up testing and reduce design risk.
It empowers SoC architects to quickly create modular, scalable AI designs, enabling faster integration, verification and optimization.
NoC tiling advantages – available with FlexNoC and Ncore
- Scalable Performance
- Power Reduction
- Dynamic Reuse
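To make the tiling concept concrete, here is a minimal Python sketch: one pre-verified tile is stamped across a grid and neighboring tiles are stitched together, so scaling the design means adding tiles rather than redesigning the interconnect. The Tile and tile_grid names are hypothetical, not FlexNoC or Ncore tooling.

```python
# Minimal sketch of NoC tiling: replicate a pre-verified tile across a
# grid and stitch adjacent tiles together (illustrative model only).
from dataclasses import dataclass

@dataclass(frozen=True)
class Tile:
    name: str
    cores: int = 4      # each tile reuses the same verified sub-NoC

def tile_grid(cols: int, rows: int):
    tiles = {(x, y): Tile(f"tile_{x}_{y}") for x in range(cols) for y in range(rows)}
    links = [((x, y), (x + dx, y + dy))
             for (x, y) in tiles
             for dx, dy in ((1, 0), (0, 1))
             if (x + dx, y + dy) in tiles]
    return tiles, links

# Scaling the design is re-stamping the same tile, not redesigning the NoC:
small, small_links = tile_grid(2, 2)    # 4 tiles, 16 cores
large, large_links = tile_grid(4, 4)    # 16 tiles, 64 cores
print(len(small_links), len(large_links))  # 4, 24
```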
Artificial Intelligence / Machine Learning

| Market | Application | Focus |
| --- | --- | --- |
| Automotive | Edge computing inference applications | Modularity and hierarchical design, easier routing and layout, plus area and power efficiency |
| Communications | Edge computing training and inference application data exchange | Scalability, easier routing and layout |
| Consumer Electronics | Embedded computing inference applications | Area and power efficiency |
| Enterprise Computing | Enterprise-level training applications | Scalability and hierarchical design, easier routing and layout |
| Industrial | Edge computing inference applications | Scalability, easier routing and layout |
Partners
Andes Technology is a founding and premier member of RISC-V International and a leading supplier of high-performance, low-power RISC-V processor IP. Andes Technology and Arteris partner to advance innovation in RISC-V-based SoC designs for AI, 5G, networking, mobile, storage, AIoT and space applications. The Andes QiLai RISC-V platform is a development board built around a QiLai SoC that combines Andes RISC-V processor IP with Arteris FlexNoC interconnect IP for on-chip connectivity.
In March 2024, Arteris delivered on its previously announced collaboration with Arm, providing an emulation-based validation system for Armv9 and CHI-E-based designs to speed up innovation in automotive electronics for autonomous driving, advanced driver-assistance systems (ADAS), cockpit and infotainment, vision, radar and lidar, body and chassis control, zonal controllers and other automotive applications. Arteris aligned its roadmap with Arm to enable designers to get to market faster with optimized, pre-validated, high-bandwidth, low-latency Ncore cache coherent interconnect IP for Arm's Automotive Enhanced (AE) compute portfolio. The partnership helps customers realize SoCs with high performance and power efficiency for safety-critical tasks while reducing project schedules and costs, and gives mutual customers a greater choice of safe, integrated and optimized automotive solutions, with seamless integration and optimized flows enabling ISO 26262 systems at the highest automotive safety integrity levels (ASIL).
The Damo Wujian Alliance, spearheaded by Damo Academy (an affiliate of Alibaba Group), is an ecosystem alliance driving the adoption and development of the RISC-V instruction-set architecture. The coalition focuses on high-performance System-on-Chip (SoC) designs, particularly in edge AI computing. As part of the alliance, Arteris plays a pivotal role by enabling the integration of Damo Academy’s / T-Head’s Xuantie RISC-V processor IP cores with its Ncore cache coherent network-on-chip (NoC) system IP, resulting in efficient data transport architectures within cores and between chips, enabling cutting-edge applications in AI, machine learning, and more.
Fraunhofer IESE is one of 76 institutes and research units of the Fraunhofer-Gesellschaft. Together, they have a major impact on shaping applied research in Europe and contribute to Germany's competitiveness in international markets. Our partnership with Fraunhofer enables early architectural analysis of how DRAM behavior affects network-on-chip performance, by connecting Ncore and FlexNoC SystemC simulations to the DRAMSys DRAM modeling and simulation framework.
Arteris and SiFive have partnered to accelerate the development of edge AI SoCs for consumer electronics and industrial applications. The partnership combines SiFive’s multi-core RISC-V processor IP and Arteris’ Ncore cache coherent interconnect IP, providing high performance and power efficiency with reduced project schedules and integration costs. The collaboration has led to the development of the SiFive 22G1 X280 Customer Reference Platform, incorporating SiFive X280 processor IP and Arteris Ncore cache coherent interconnect IP on the AMD Virtex UltraScale+ FPGA VCU118 Evaluation Kit.
Semidynamics is a provider of fully customizable RISC-V processor IP, specializing in high-bandwidth, high-performance cores with Vector Units, Tensor Units and Gazzillion, targeted at machine learning and AI applications. Our collaboration enhances the flexibility and configurability of processor IP interoperating with system IP, aiming to deliver integrated and optimized solutions focused on accelerating artificial intelligence, machine learning and high-performance computing (HPC) applications.
Products
- Video: Scaling Performance In AI Systems | Semiconductor Engineering
- Technical Papers
- Presentations
- The Role of Networks-on-Chips Enabling AI/ML Silicon and Systems | AI Everywhere 2023
- Customers
- SiMa.ai case study – Push-Button Ease of Arteris FlexNoC Freed Up the Team to Focus on Designing The World’s First Machine Learning SoC
- Sondrel case study – Shortening Leading-Edge ADAS Design Cycles With FlexNoC To Deliver Customer Success
- Articles
- Press Releases
- Arteris Deployed by Menta for Edge AI Chiplet Platform | Dec 03, 2024
- Arteris and MIPS Partner on High-Performance RISC-V SoCs for Automotive, Datacenter and Edge AI | Nov 12, 2024
- Arteris Selected by TIER IV for Intelligent Vehicles | Oct 29, 2024
- Arteris and SiFive Deliver Pre-verified Solution for the Datacenter Market | Oct 21, 2024
- Arteris Network-on-Chip Tiling Innovation Accelerates Semiconductor Designs for AI Applications | Oct 15, 2024
- Arteris Selected by Esperanto Technologies to Integrate RISC-V Processors for High-Performance AI and Machine Learning Solutions | Jun 11, 2024
- Andes Technology and Arteris Partner To Accelerate RISC-V SoC Adoption | May 22, 2024
- Rebellions Selects Arteris for Its Next-Generation Neural Processing Unit Aimed at Generative AI | Apr 09, 2024
- Arteris Expands Automotive Solutions for Armv9 Architecture CPUs | Mar 13, 2024
- EdgeQ Deploys Arteris IP for its 5G+AI Base Station-on-a-Chip for Wireless Infrastructure | Feb 13, 2024
- Arteris Selected by Rain AI for Use in the Next Generation of AI | Jan 30, 2024
- Semidynamics and Arteris Partner To Accelerate AI RISC-V System-on-Chip Development | Nov 02, 2023
- Fraunhofer IESE Partners With Arteris To Accelerate Advanced Network-on-chip Architecture Development for AI/ML Applications | Oct 17, 2023
- Arteris Interconnect IP Deployed in NeuReality Inference Server for Generative AI and Large Language Model Applications | Oct 10, 2023
- Tenstorrent Selects Arteris IP for AI High-Performance Computing and Datacenter RISC-V Chiplets | May 02, 2023
- Arteris IP Licensed by Axelera AI to Accelerate Computer Vision at the Edge | Apr 26, 2023
- Arteris IP Selected By ASICLAND for Automotive, AI Enterprise and AI Edge SoCs | Apr 12, 2023
- Arteris and SiFive Partner to Accelerate RISC-V SoC Design of Edge AI Applications | Feb 27, 2023
- Arteris Collaborates with SiMa.ai to Optimize ML Implementation With Efficient Topology Interconnect IP for the Embedded Edge | Aug 30, 2022
- Arteris IP FlexNoC Interconnect and Resilience Package Licensed in Neural Network Accelerator Chip Project Led by BMW Group | Apr 5, 2022
- Arteris FlexNoC Interconnect Licensed by Eyenix for AI-Enabled Imaging/Digital Camera SoC | Oct 19, 2021
- Arteris FlexNoC Interconnect Licensed for use in SK Telecom SAPEON AI Chips | Sep 7, 2021
- Arteris IP FlexNoC Interconnect and Resilience Package Licensed by Hailo for Artificial Intelligence (AI) Chip | Jan 12, 2021
- Arteris FlexNoC Interconnect and AI Package Licensed by Blue Ocean Smart System for AI Chips | May 19, 2020
- Arteris FlexNoC Interconnect and AI Package Licensed by Vastai Technologies for Artificial Intelligence Chips | Jan 21, 2020
- Arteris Ncore Cache Coherent Interconnect Licensed by Bitmain for Sophon TPU Artificial Intelligence (AI) Chips | Jul 9, 2019