Semiconductor Engineering: Reducing SoC Power With NoCs And Caches

Andy Nightingale, Oct 31, 2024

Today’s system-on-chip (SoC) designs face significant challenges in managing and minimizing power consumption while maintaining high performance and scalability. Network-on-chip (NoC) interconnects, coupled with innovative cache memories, can address these competing requirements.

Traditional NoCs

SoCs consist of intellectual property (IP) blocks that need to be connected. Early SoCs used bus-based architectures, which worked well for a single initiator and multiple targets but couldn’t scale to systems with multiple initiators because bus arbitration became a performance bottleneck. Later, crossbar switches were introduced, enabling multiple initiators to communicate with multiple targets, but this solution increased area, routing congestion and power consumption.
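To see why a shared bus stops scaling, consider the arbitration step itself. The sketch below is a minimal, hypothetical round-robin arbiter (not any specific bus protocol): only one initiator is granted the bus per cycle, so as the number of requesting initiators grows, each one waits proportionally longer.

```cpp
#include <cstdio>
#include <vector>

// Hypothetical round-robin arbiter for a shared bus. One grant per cycle
// means contention grows with the number of initiators.
class RoundRobinArbiter {
public:
    explicit RoundRobinArbiter(int num_initiators)
        : num_(num_initiators), last_grant_(num_initiators - 1) {}

    // Returns the index of the granted initiator, or -1 if nobody requested.
    int arbitrate(const std::vector<bool>& requests) {
        for (int offset = 1; offset <= num_; ++offset) {
            int candidate = (last_grant_ + offset) % num_;
            if (requests[candidate]) {
                last_grant_ = candidate;
                return candidate;
            }
        }
        return -1;
    }

private:
    int num_;
    int last_grant_;
};

int main() {
    RoundRobinArbiter arbiter(4);
    // All four initiators request the bus in the same cycle.
    std::vector<bool> requests = {true, true, true, true};
    for (int cycle = 0; cycle < 4; ++cycle) {
        // Only one initiator is served per cycle; the rest stall.
        std::printf("cycle %d: grant -> initiator %d\n",
                    cycle, arbiter.arbitrate(requests));
    }
    return 0;
}
```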

Today’s high-end SoCs employ chip-spanning IP in the form of NoCs, in which data from initiator IPs is serialized, packetized and transmitted through the on-chip network to the designated target IPs. Multiple packets can be “in flight” across the NoC at the same time. A NoC-based architecture uses fewer wires, which consume less silicon real estate while dramatically reducing routing congestion and power consumption.
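The sketch below illustrates the serialize-and-packetize idea in the simplest possible terms. The flit format, field widths and packetize_write helper are invented for illustration and are not the format used by any particular NoC product: a write transaction is broken into a header flit carrying routing information followed by one payload flit per data beat.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Illustrative flit: a real NoC defines its own header fields, flit widths
// and routing scheme.
struct Flit {
    bool     head;     // first flit of a packet (carries routing info)
    bool     tail;     // last flit of a packet
    uint32_t payload;  // header fields or one data beat
};

// Packetize a simple write transaction: one header flit with the target ID
// and a truncated address, then one payload flit per 32-bit data beat.
std::vector<Flit> packetize_write(uint8_t target_id, uint32_t address,
                                  const std::vector<uint32_t>& data) {
    std::vector<Flit> packet;
    packet.push_back({true, data.empty(),
                      (uint32_t(target_id) << 24) | (address & 0xFFFFFF)});
    for (size_t i = 0; i < data.size(); ++i) {
        packet.push_back({false, i + 1 == data.size(), data[i]});
    }
    return packet;
}

int main() {
    // A two-beat write from an initiator to target 3.
    auto packet = packetize_write(3, 0x001000, {0xDEADBEEF, 0xCAFEF00D});
    for (const auto& f : packet) {
        std::printf("head=%d tail=%d payload=0x%08X\n",
                    f.head, f.tail, f.payload);
    }
    return 0;
}
```

Because each packet carries its own routing information, many such packets from different initiators can traverse the network concurrently, which is what allows the NoC to replace wide, dedicated point-to-point wiring.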

Physically-aware NoCs

Despite all the advantages offered by NoCs, each new generation of SoCs tends to employ an increasing number of IP blocks, with modern SoCs often containing hundreds. Furthermore, the IPs themselves can be significantly more complex than their predecessors. In fact, many IP blocks contain their own IPs connected via an internal NoC.

To read the full article on Semiconductor Engineering, click here.
