
AI / Machine Learning

Accelerating AI chiplet and multi-die chip development from the data center to smart edge devices with high-performance, energy-efficient data transport, flexible network-on-chip (NoC) interconnects, and automated SoC integration, enabling faster, lower-risk innovation.

Overview

AI applications are becoming critical in our hyperconnected world. Deloitte estimates annual AI semiconductor revenue will reach ~$400B by 2027, while AMD projects ~$500B by 2028; other reports (e.g., Roots Analysis) project as much as ~$847B by 2035. This surge is fueled by rapid growth in AI data centers and smart edge devices, with a higher CAGR for AI inferencing chips.

As demand for AI chipsets grows, key development challenges arise: balancing power consumption and performance, managing limited memory and processing resources, addressing data security and privacy concerns, and optimizing connectivity and communication protocols, with significant differences between data center and edge needs.


Arteris, with its NoC technology optimized for high bandwidth, low latency, low power, and chiplet and multi-die SoC architectures, is the natural choice for the full spectrum of AI computing. From the data center to edge applications, Arteris helps architects maximize performance and efficiency while managing the spatial distribution of AI workloads, as demonstrated by hundreds of silicon-proven designs.

AI Data Center

AI infrastructure computing, in the data center or cloud, is designed for massive data computation, movement, and storage to support the needs of AI workloads.

For example, Natural Language Processing (NLP) models and Large Language Models (LLMs) such as GPT-5 require massive compute to train on enormous datasets, often with the help of thousands of GPUs working in parallel. Frontier LLMs reach the 1–10T+ parameter range; NVIDIA positions Blackwell for real-time inference up to 10T parameters.

Batch Inference, particularly for applications like non-interactive research, can tolerate longer latencies—the time it takes to go from inquiry to results—and is an excellent fit for AI infrastructure computing.

Key Care-Abouts:

  • Performance, specifically throughput (PFLOPs) and bandwidth (TB/s)
  • Scale-up and scale-out, spanning thousands of nodes with UALink, NVLink, Ultra Ethernet, etc.
  • Latency between CPU clusters and HBM or DDR memory, with JEDEC releasing HBM4 in April 2025, supporting up to ~2 TB/s per stack
  • TCO (Total Cost of Ownership), often strongly related to megawatts (MW) and up to gigawatts (GW) of energy consumption. Modern AI racks (e.g., GB200 NVL72) draw ~120–132 kW per rack, and hyperscale campuses are planning hundreds of MW to multi-GW sites (see the sizing sketch after this list).
  • XPU support across GPUs, TPUs, DPUs, IPUs, AI accelerators, and high-end CPUs, plus the underlying data movement across these and high-bandwidth memory (e.g., HBM4)
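
To make these figures concrete, here is a minimal back-of-envelope sketch in Python. The ~2 TB/s per HBM4 stack and ~120–132 kW per rack come from the points above; the per-XPU stack count and campus power budget are hypothetical inputs chosen purely for illustration.

```python
# Back-of-envelope sizing using figures cited above. The per-XPU stack
# count and campus power target are hypothetical illustration inputs.

HBM4_BW_PER_STACK_TBPS = 2.0   # ~2 TB/s per HBM4 stack (cited above)
RACK_POWER_KW = 126.0          # midpoint of the ~120-132 kW per rack cited above

def xpu_memory_bandwidth_tbps(stacks_per_xpu: int) -> float:
    """Aggregate HBM bandwidth for one XPU (hypothetical stack count)."""
    return stacks_per_xpu * HBM4_BW_PER_STACK_TBPS

def racks_in_power_budget(campus_power_mw: float) -> int:
    """Racks a campus IT power budget can host, ignoring cooling overhead."""
    return int(campus_power_mw * 1000 / RACK_POWER_KW)

print(f"8-stack XPU: {xpu_memory_bandwidth_tbps(8):.0f} TB/s of HBM bandwidth")
print(f"100 MW campus: ~{racks_in_power_budget(100)} racks at ~126 kW each")
```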

Edge AI

AI at the edge means that data is processed closer to its source—on devices like IoT sensors, smartphones, cars, robots, or local gateways. This setup reduces latency significantly, enabling immediate decision-making with real-time processing. For example, cars from companies like Tesla and Waymo use edge AI to process data from sensors in real-time, allowing for split-second decisions on the road without relying on central data centers. Round-trip network times of ~60–70 ms (typical of 4G and observed medians on some 5G networks) are too high for mission-critical control applications such as autonomous cars or robots, motivating on-device or near-edge processing despite ongoing 5G/6G advances.
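
A quick worked example of why that round trip matters: the sketch below computes how far a vehicle travels while waiting on a response. The ~65 ms round trip reflects the 4G/5G figures above; the vehicle speed and on-device inference time are illustrative assumptions.

```python
# Distance a vehicle covers while waiting on a result. The round-trip
# figure reflects the ~60-70 ms medians cited above; vehicle speed and
# on-device inference time are illustrative assumptions.

def blind_distance_m(speed_mps: float, latency_ms: float) -> float:
    """Meters traveled during a latency window."""
    return speed_mps * latency_ms / 1000.0

HIGHWAY_SPEED_MPS = 30.0    # ~108 km/h, assumed
CLOUD_ROUND_TRIP_MS = 65.0  # midpoint of the range cited above
ON_DEVICE_MS = 10.0         # hypothetical edge-NPU inference time

print(f"Cloud round trip: {blind_distance_m(HIGHWAY_SPEED_MPS, CLOUD_ROUND_TRIP_MS):.2f} m traveled")
print(f"On-device:        {blind_distance_m(HIGHWAY_SPEED_MPS, ON_DEVICE_MS):.2f} m traveled")
```

At highway speed, the cloud round trip alone costs nearly two meters of travel before any decision can be acted on.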

Key Care-Abouts:

  • Ultra-low latency to allow real-time decisions without cloud access
  • Energy efficiency to support inference in the mW to W range, often on rechargeable batteries
  • Form factors and cost trade-offs associated with endpoint devices
  • Performance and bandwidth trade-offs that accommodate the above
  • XPU support across NPUs, Edge TPUs, MCUs, embedded FPGAs, and embedded GPUs
Advantages

Performance and Bandwidth

Increase chiplet and multi-die SoC bandwidth with HBM and multichannel memory support, multicast/broadcast writes, VC‑Link™ Virtual Channels, and source‑synchronous communications.

Scalability

Create highly scalable ring, mesh, and torus topologies with highly efficient approaches, unlike black-box compilers. SoC architects can edit generated topologies and optimize each individual network router if desired, with support for scale-up protocols such as UALink and a broad ecosystem.
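
For intuition on how topology choice affects latency at scale, the sketch below compares average hop counts for ring, 2D mesh, and 2D torus networks built from the same number of routers. It is an abstract graph model for illustration, not a model of any Arteris product.

```python
# Average hop count for ring, 2D mesh, and 2D torus topologies of N routers.
# Abstract graph model for intuition only, not a model of any specific NoC IP.
from collections import deque

def avg_hops(adjacency: dict) -> float:
    """Mean shortest-path length over all ordered router pairs (BFS)."""
    total, pairs = 0, 0
    for src in adjacency:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            node = queue.popleft()
            for nbr in adjacency[node]:
                if nbr not in dist:
                    dist[nbr] = dist[node] + 1
                    queue.append(nbr)
        total += sum(d for n, d in dist.items() if n != src)
        pairs += len(dist) - 1
    return total / pairs

def ring(n):
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def mesh(rows, cols, wrap=False):
    adj = {}
    for r in range(rows):
        for c in range(cols):
            nbrs = []
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if wrap:
                    nbrs.append((nr % rows, nc % cols))   # torus: wrap edges
                elif 0 <= nr < rows and 0 <= nc < cols:
                    nbrs.append((nr, nc))
            adj[(r, c)] = nbrs
    return adj

print(f"64-router ring: {avg_hops(ring(64)):.2f} avg hops")
print(f"8x8 mesh:       {avg_hops(mesh(8, 8)):.2f} avg hops")
print(f"8x8 torus:      {avg_hops(mesh(8, 8, wrap=True)):.2f} avg hops")
```

On 64 routers, the ring averages roughly four times the hops of the torus, which is one reason mesh and torus topologies dominate large AI designs.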

Energy Efficiency, Low Power

Fewer wires and fewer gates consume less power; breaking communication paths into smaller segments allows powering only active segments, and a simple internal protocol enables aggressive clock gating, supporting lower TCO for data centers and better edge battery life.
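
The toy model below, with purely hypothetical power constants, shows why this matters: when a route is split into segments and only the active ones are clocked, dynamic energy tracks utilization rather than path length.

```python
# Toy power model for path segmentation with clock gating.
# All constants are hypothetical, for intuition only.

def link_power_mw(segments: int, active_segments: int,
                  dyn_mw_per_seg: float = 1.0,
                  leak_mw_per_seg: float = 0.05) -> float:
    """Dynamic power burned only on active segments; leakage everywhere."""
    return active_segments * dyn_mw_per_seg + segments * leak_mw_per_seg

# Ungated: every segment toggles whether or not it carries traffic.
ungated = link_power_mw(segments=10, active_segments=10)
# Gated at 20% utilization: only 2 of 10 segments are clocked.
gated = link_power_mw(segments=10, active_segments=2)
print(f"ungated: {ungated:.2f} mW, gated: {gated:.2f} mW "
      f"({100 * (1 - gated / ungated):.0f}% lower)")
```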

Featured Innovations

AI Chiplets and Multi-Die Designs

The insatiable drive toward AI-driven computing in semiconductor devices is propelling adoption of multi-die systems and chiplet-based designs to satisfy the escalating demands of today’s complex, high-performance computing and AI workloads.

Arteris accelerates AI-driven silicon innovation with its expanded multi-die solution, which delivers flexible design scalability, differentiated AI performance, alignment with evolving industry standards, silicon-proven chiplets, and broad ecosystem support.

Network‑on‑Chip AI Tiling to accelerate semiconductor designs for AI applications

AI Tiling is an emerging trend in chiplet and SoC design that uses proven, robust network‑on‑chip IP to facilitate scaling, condense design time, speed testing, and reduce design risk.

It empowers SoC architects to quickly create modular, scalable AI designs, enabling faster integration, verification, and optimization across non-coherent and coherent data traffic.
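
As a loose illustration of the concept (this is not the FlexGen, FlexNoC, or Ncore API), the sketch below stamps one pre-verified tile description across a grid and derives the inter-tile links from the grid structure, which is the replicate-and-connect property tiling relies on.

```python
# Sketch of NoC tiling: replicate one pre-verified tile across a grid and
# derive inter-tile links from the grid structure. Illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Tile:
    row: int
    col: int
    kind: str = "ai_compute"   # hypothetical tile type

def tile_grid(rows: int, cols: int):
    """Instantiate identical tiles and the mesh links between them."""
    tiles = [Tile(r, c) for r in range(rows) for c in range(cols)]
    links = []
    for t in tiles:
        if t.col + 1 < cols:
            links.append((t, Tile(t.row, t.col + 1)))   # east link
        if t.row + 1 < rows:
            links.append((t, Tile(t.row + 1, t.col)))   # south link
    return tiles, links

tiles, links = tile_grid(4, 4)
print(f"{len(tiles)} tiles, {len(links)} inter-tile links")  # 16 tiles, 24 links
```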

AI tiling advantages – available with FlexGen, FlexNoC, and Ncore NoC IPs

  • Scalable Performance
  • Power and Area Reduction
  • Dynamic Reuse and Productivity
  • Supports Arm, RISC-V, x86, and Mixed Architectures

Physical Awareness

The majority of modern AI hardware relies on advanced nodes, from 5nm and below at the edge to 3nm, 2nm, and increasingly angstrom-class nodes in data center designs. In such designs, particularly with the massive underlying AI data movement, wires and physical closure, including timing closure, become increasingly challenging, often resulting in missed silicon schedules, re-designs, or both.

Arteris provides physically aware NoCs, enabling SoC architecture teams, logic designers, and integrators to account for physical constraints across power, performance, and area (PPA) early in the design cycle. This results in 5X faster physical convergence than manual refinement, with fewer iterations with the back-end physical design team; the resulting physically optimized NoC IPs are ready for physical synthesis and place-and-route without further re-design of the overall chiplet or multi-die SoC.
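
For a feel of what physical awareness has to account for, the sketch below estimates how many pipeline stages a long NoC route needs to close timing. The delay-per-mm, clock frequency, and timing margin are hypothetical ballpark values for an advanced node, not Arteris data.

```python
# Estimate pipeline stages needed for a long NoC route to meet timing.
# Delay-per-mm, frequency, and margin are hypothetical ballpark values.
import math

def pipeline_stages(route_mm: float, freq_ghz: float,
                    wire_delay_ns_per_mm: float = 0.15,
                    timing_margin: float = 0.7) -> int:
    """Stages so each segment's wire delay fits in a derated clock period."""
    period_ns = 1.0 / freq_ghz
    usable_ns = period_ns * timing_margin        # leave margin for logic/setup
    total_delay_ns = route_mm * wire_delay_ns_per_mm
    return max(0, math.ceil(total_delay_ns / usable_ns) - 1)

for mm in (2, 5, 10):
    print(f"{mm} mm route @ 2 GHz: {pipeline_stages(mm, 2.0)} pipeline stage(s)")
```

Longer routes quickly demand multiple pipeline stages, which is exactly the kind of constraint a physically aware NoC surfaces at the architecture stage rather than at back-end closure.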


Arteris NoC Tiling Innovation Addresses AI Use Cases Across Markets

Artificial Intelligence / Machine Learning

Andes Technology is a founding and premier member of RISC-V International and a leading supplier of high-performance, low-power RISC-V processor IP. Andes Technology and Arteris partner to advance innovation in RISC-V-based SoC designs for AI, 5G, networking, mobile, storage, AIoT, and space applications. The Andes QiLai RISC-V platform is a development board with a QiLai SoC featuring Andes' RISC-V processor IP along with Arteris FlexNoC interconnect IP for on-chip connectivity.


In March 2024, Arteris delivered on its previously announced collaboration with Arm: an emulation-based validation system for Armv9 and CHI-E-based designs, speeding up innovation in automotive electronics for autonomous driving, advanced driver-assistance systems (ADAS), cockpit and infotainment, vision, radar and lidar, body and chassis control, zonal controllers, and other automotive applications. Arteris aligned its roadmap with Arm to enable designers to get to market faster with an optimized, pre-validated, high-bandwidth, low-latency Ncore cache coherent interconnect IP for Arm's Automotive Enhanced (AE) compute portfolio. The partnership helps customers realize SoCs with high performance and power efficiency for safety-critical tasks while reducing project schedules and costs, and offers mutual customers a greater choice of safe, integrated, and optimized automotive solutions with seamless integration, optimized flows, and high quality of results, enabling ISO 26262 systems at the highest automotive safety integrity levels (ASIL).


The Damo Wujian Alliance, spearheaded by Damo Academy (an affiliate of Alibaba Group), is an ecosystem alliance driving the adoption and development of the RISC-V instruction-set architecture. The coalition focuses on high-performance System-on-Chip (SoC) designs, particularly in edge AI computing. As part of the alliance, Arteris plays a pivotal role by enabling the integration of Damo Academy’s / T-Head’s Xuantie RISC-V processor IP cores with its Ncore cache coherent network-on-chip (NoC) system IP, resulting in efficient data transport architectures within cores and between chips, enabling cutting-edge applications in AI, machine learning, and more.


Fraunhofer IESE is one of 76 institutes and research units of the Fraunhofer-Gesellschaft. Together they have a major impact on shaping applied research in Europe and contribute to Germany's competitiveness in international markets. Our partnership with Fraunhofer enables early architecture analysis of DRAM effects on network-on-chip performance by connecting Ncore and FlexNoC SystemC simulations with the DRAMSys DRAM modeling and simulation framework.


Arteris and SiFive have partnered to accelerate the development of edge AI SoCs for consumer electronics and industrial applications. The partnership combines SiFive’s multi-core RISC-V processor IP and Arteris’ Ncore cache coherent interconnect IP, providing high performance and power efficiency with reduced project schedules and integration costs. The collaboration has led to the development of the SiFive 22G1 X280 Customer Reference Platform, incorporating SiFive X280 processor IP and Arteris Ncore cache coherent interconnect IP on the AMD Virtex UltraScale+ FPGA VCU118 Evaluation Kit.


Semidynamics is a provider of fully customizable RISC-V processor IP, specializing in high-bandwidth, high-performance cores with Vector Units, Tensor Units, and Gazzillion technology, targeted at machine learning and AI applications. Our collaboration enhances the flexibility and configurable interoperability of processor IP with system IP, delivering integrated and optimized solutions focused on accelerating artificial intelligence, machine learning, and high-performance computing (HPC) applications.


Products


Customer Testimonials

Trusted by innovative companies everywhere

Resources


Technical Paper
Security in Artificial Intelligence

Unlock insights into AI security challenges & protection strategies. Dive deep into the dual role of AI in cybersecurity.

What's New

Latest News
