Arteris Press Releases

Arteris® IP FlexNoC® Interconnect and AI Package Licensed by Vastai Technologies for Artificial Intelligence Chips

State-of-the-art Network-on-Chip (NoC) interconnect enables faster performance and shorter development time

CAMPBELL, Calif. — January 21, 2020 — Arteris IP, the world’s leading supplier of innovative, silicon-proven network-on-chip (NoC) interconnect intellectual property, today announced that Vastai Technologies has licensed Arteris FlexNoC Interconnect IP and the accompanying AI Package for use in its next-generation artificial intelligence and computer vision systems-on-chip (SoCs).

“We chose Arteris IP because of their excellent reputation and the maturity of their NoC IP. Using Arteris IP reduced our product costs and shortened our development schedule while allowing us to achieve better performance than we thought was possible. In addition, the Arteris IP team has exceeded our expectations for local technical support and engineering expertise.”


John Qian, CEO, Vastai Technologies

Topics: SoC, NoC, new customer, performance, AI chips, ML/AI, scalable hardware, on-chip bandwidth

Arteris IP Ncore® Cache Coherent Interconnect Licensed by Bitmain for Sophon TPU Artificial Intelligence (AI) Chips

Network-on-chip (NoC) interconnect enables faster performance and lower die area for Tensor Processing Unit (TPU) AI/ML applications

CAMPBELL, Calif. — June 9, 2019 — Arteris IP, the world’s leading supplier of innovative, silicon-proven network-on-chip (NoC) interconnect intellectual property, today announced that Bitmain has licensed Arteris Ncore Cache Coherent Interconnect IP for use in its next-generation Sophon Tensor Processing Unit (TPU) systems-on-chip (SoCs) for the scalable hardware acceleration of artificial intelligence (AI) and machine learning (ML) algorithms.

“Our choice of interconnect IP became more important as we continued to increase the complexity and performance of Sophon AI SoCs. The Arteris Ncore cache coherent interconnect IP allowed us to increase our on-chip bandwidth and reduce die area, while being easy to implement in the backend. The Ncore IP’s configurability helped us optimize the die area of our SoC, which permits us to offer our users more performance at lower cost.”


Haichao Wang, CEO, Bitmain

Topics: SoC, NoC, new customer, performance, AI chips, ML/AI, scalable hardware, on-chip bandwidth