Generating mesh, ring, and torus topologies for AI, DNN, and ML designs poses challenges in scalability, latency, fault tolerance, and data communication efficiency.
The XL Option automatically generates these interconnect topologies. Unlike black-box compiler approaches, it lets SoC architects edit the generated topologies and optimize each individual network router.
It also expands the number of NoC initiators and targets, allowing a larger number of IP blocks and components to be integrated within the system while ensuring seamless integration of diverse functionalities.
Key capabilities and benefits:
- Create highly scalable ring, mesh, and torus topologies, enabling designers to handle higher data volumes and optimize system performance.
- Efficiently span long distances across very large chips, leading to optimized system functionality.
- Increase on-chip and off-chip bandwidth via mesh-based interconnect generation, improving overall system performance and accelerating data-intensive computations.
- Enhance the system's ability to handle large amounts of data, ensuring smooth operation even in highly interconnected designs.
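To make the topology terminology concrete, here is a minimal, hypothetical Python sketch of how router links in mesh, torus, and ring grids can be enumerated. This is an illustration only, not the Arteris FlexNoC XL tool or any real API; the function name and parameters are invented for this example. It shows why a torus needs more links than a mesh of the same size: wrap-around connections shorten worst-case hop counts.

```python
def grid_links(rows, cols, torus=False):
    """Return the set of bidirectional router links for a rows x cols grid.

    With torus=True, wrap-around links connect each edge router to the
    opposite edge, reducing the worst-case hop count versus a plain mesh.
    A 1 x N torus degenerates to a ring.
    """
    links = set()
    for r in range(rows):
        for c in range(cols):
            right = (r, (c + 1) % cols)  # neighbor to the right (wraps on torus)
            down = ((r + 1) % rows, c)   # neighbor below (wraps on torus)
            if (torus and cols > 1) or c + 1 < cols:
                links.add(frozenset({(r, c), right}))
            if (torus and rows > 1) or r + 1 < rows:
                links.add(frozenset({(r, c), down}))
    return links

mesh = grid_links(4, 4)                  # 4x4 mesh: 24 links
torus = grid_links(4, 4, torus=True)     # 4x4 torus: 32 links (adds wrap-around)
ring = grid_links(1, 8, torus=True)      # 8-router ring: 8 links
print(len(mesh), len(torus), len(ring))
```

The extra wrap-around links are what let a torus span long distances across a large die with fewer hops, at the cost of more wiring; an interconnect generator trades these off per design.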
Read more about why we are unique on our NoC Technology page.
"Arteris FlexNoC interconnect IP is the only interconnect that would allow our AI chips to achieve their high bandwidth requirements while also meeting our QoS requirements. Using Arteris NoC technology allows our architecture to take maximum advantage of state-of-the-art HBM2 memories to avoid system-level data starvation, which is a major problem with less efficient AI training chips."