Arteris: The Future Of SoC Design Is Data Movement

Semiconductor Engineering: The Future Of SoC Design Is Data Movement

The semiconductor industry is experiencing rapid advances in chiplet adoption, high-bandwidth memory, Compute Express Link (CXL) fabrics, and automotive zonal architectures. As we move into the second half of 2025, the only sustainable path forward is a layered, physically aware, and automated interconnect methodology that can keep pace with escalating complexity.

This article is Part Two of “Data Movement Is The Energy Bottleneck Of Today’s SoCs,” which argued that data movement, not compute, has become the limiting factor for performance and energy efficiency in complex chips. That reality remains true today, with even greater urgency. The blog below updates the industry requirements, design consequences, and architectural patterns shaping system-on-chip (SoC) data movement strategies.

Industry requirements and consequences

Emerging technologies are reshaping SoC design, each bringing new requirements and risks. These shifts, from chiplet packaging to off-chip fabrics and automotive safety mandates, place new stress on the interconnect. The result is a greater risk of delays, inefficient bandwidth use, and certification setbacks.

  • Chiplets and Multi-Die Scaling: Scaling beyond a single die has become essential for artificial intelligence (AI) and high-performance computing (HPC). Instead of building one monolithic chip, designers now split systems into multiple dies, often fabricated on different process nodes to optimize yield and cost. The challenge is to ensure that software and workloads continue to experience the system as a single, unified compute environment. Non-Uniform Memory Access (NUMA) is an architecture where access times to memory vary depending on the location of the data relative to the processor(s). Without NUMA-aware coherency, performance varies unpredictably, latency grows erratic, and bandwidth is wasted. This is why the Universal Chiplet Interconnect Express (UCIe) standard is becoming so important. It provides a foundation for die-to-die connectivity, essentially a “PCIe for chiplets.” UCIe reduces fragmentation and ensures that chiplets from different vendors interoperate predictably within the same package. Note, however, that implementation maturity across tools and vendors is still evolving.
  • HBM/DDR Bandwidth: High-bandwidth memory (HBM4) and next-generation DDR provide unprecedented throughput, but raw bandwidth is only part of the equation. Tail latency and head-of-line blocking can waste a significant portion of that capacity if traffic management is not handled correctly. Think of HBM as a multi-lane highway. Without traffic rules, a minor fender bender can bring the entire freeway to a standstill. In SoCs, the equivalent solution is an interconnect with robust quality of service (QoS) features that prioritize latency-sensitive traffic while ensuring that bulk data transfers flow smoothly.
  • CXL 3.x Fabrics: Compute Express Link (CXL) 3.x introduces the ability to pool memory and accelerators beyond the SoC boundary. This promises enormous flexibility for data center workloads but creates new requirements inside the SoC. On-chip fabrics must be able to “speak CXL” through proxy logic, ensuring that external devices exchange data coherently with CPUs and caches. Without this alignment, designers risk protocol mismatches, expensive overprovisioning, and inefficient workarounds.
  • Automotive Zonal Architectures: The automotive industry is transforming from dozens of distributed electronic control units (ECUs) to a smaller number of centralized zonal controllers. These zonal architectures demand safety-certified interconnects that guarantee isolation between functions. For example, changing the radio station cannot, under any circumstances, interfere with braking. Certification to ISO 26262 at the highest ASIL D level requires partitioning, diagnosability, and interference-free operation. Without this, automotive programs can face delays of a year or more, an unacceptable risk in an industry with long product cycles and high safety stakes.
  • Backend Congestion: Traditional buses and crossbars are no longer sustainable. As more IP blocks are added, wire counts grow quadratically, clogging routing resources and creating severe congestion in back-end design flows. This wire bloat forces expensive fixes late in the design cycle. A physically aware network-on-chip (NoC) reduces wire counts early in the flow, easing pressure on routing and preventing schedule slips and missed tape-outs.
  • Integration Complexity: Modern SoCs contain millions of configuration and status registers (CSRs) and thousands of connections. Managing these with spreadsheets and manual flows is unsustainable. The result is an inevitable drift between specification, RTL, and firmware. The industry increasingly turns to machine-readable golden specifications, such as IP-XACT or SystemRDL, that automatically generate RTL, firmware headers, and documentation. This keeps all representations synchronized, reduces errors, and accelerates time to market.
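The quality-of-service idea in the HBM/DDR bullet can be made concrete with a small simulation. The sketch below is illustrative only (the arbitration policy and all names are assumptions, not an Arteris implementation): a strict-priority arbiter serves latency-sensitive requests first but reserves a periodic slot for bulk transfers so they are never fully starved.

```python
from collections import deque

def arbitrate(latency_q, bulk_q, cycles, bulk_reserve=4):
    """Strict-priority arbiter with a periodic slot reserved for bulk
    traffic, so low-priority transfers make forward progress even under
    a steady stream of latency-sensitive requests."""
    served = []
    for cycle in range(cycles):
        # Every `bulk_reserve` cycles, guarantee bulk traffic one slot.
        if cycle % bulk_reserve == bulk_reserve - 1 and bulk_q:
            served.append(("bulk", bulk_q.popleft()))
        elif latency_q:
            served.append(("latency", latency_q.popleft()))
        elif bulk_q:
            served.append(("bulk", bulk_q.popleft()))
    return served

latency_q = deque(range(6))   # latency-sensitive requests (e.g., CPU reads)
bulk_q = deque(range(12))     # bulk DMA transfers
schedule = arbitrate(latency_q, bulk_q, cycles=12)
```

Without the reserved slot this degenerates to pure strict priority, which is exactly the head-of-line-blocking failure mode the bullet warns about, only inverted: bulk traffic would starve instead of blocking.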
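The quadratic wire growth described under backend congestion is easy to see with back-of-the-envelope arithmetic. The sketch below compares a full crossbar, where every initiator-to-target pair needs its own set of wires, with a mesh-style NoC whose routers have a fixed number of links; the bus width and links-per-router values are illustrative assumptions.

```python
def crossbar_wires(n_ip, bus_width=128):
    # Full crossbar: every initiator can reach every target,
    # so the wire count grows with the square of the IP count.
    return n_ip * n_ip * bus_width

def noc_wires(n_ip, bus_width=128, links_per_router=4):
    # Mesh-style NoC: each IP attaches to a router with a fixed
    # number of links, so wires grow roughly linearly.
    return n_ip * links_per_router * bus_width

# Doubling the IP count quadruples crossbar wiring but only
# doubles NoC wiring.
wires_32, wires_64 = crossbar_wires(32), crossbar_wires(64)
```

This is why the article argues for moving to a physically aware NoC early: the divergence between the two curves is what clogs routing resources late in the back-end flow.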
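The golden-specification flow in the integration bullet can be sketched in a few lines. Real flows consume IP-XACT or SystemRDL descriptions; here a plain Python dict stands in for the machine-readable spec, and the block name, register names, and offsets are invented for illustration. The point is that the firmware header is derived, never hand-edited, so it cannot drift from the spec.

```python
# Hypothetical golden spec: register name -> byte offset and
# field name -> (lsb, width). Stands in for IP-XACT/SystemRDL.
REGMAP = {
    "CTRL":   {"offset": 0x00, "fields": {"ENABLE": (0, 1), "MODE": (1, 2)}},
    "STATUS": {"offset": 0x04, "fields": {"BUSY": (0, 1), "ERR": (1, 1)}},
}

def to_c_header(block, regmap):
    """Generate firmware #defines from the golden spec."""
    lines = ["/* Auto-generated from golden spec: do not edit. */"]
    for reg, info in regmap.items():
        lines.append(f"#define {block}_{reg}_OFFSET 0x{info['offset']:02X}")
        for field, (lsb, width) in info["fields"].items():
            mask = ((1 << width) - 1) << lsb
            lines.append(f"#define {block}_{reg}_{field}_MASK 0x{mask:08X}")
    return "\n".join(lines)

header = to_c_header("DMA", REGMAP)
```

The same dict could feed RTL generation and documentation templates, which is exactly the synchronization benefit the bullet describes.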

To read the full article on Semiconductor Engineering, click here.