Arteris Press Releases

Arteris IP Ncore® Cache Coherent Interconnect Licensed by Bitmain for Sophon TPU Artificial Intelligence (AI) Chips

Network-on-chip (NoC) interconnect IP enables higher performance and smaller die area for Tensor Processing Unit (TPU) AI/ML applications

CAMPBELL, Calif. – June 9, 2019 – Arteris IP, the world’s leading supplier of innovative, silicon-proven network-on-chip (NoC) interconnect intellectual property, today announced that Bitmain has licensed Arteris Ncore Cache Coherent Interconnect IP for use in its next-generation Sophon Tensor Processing Unit (TPU) systems-on-chip (SoCs) for the scalable hardware acceleration of artificial intelligence (AI) and machine learning (ML) algorithms.

“Our choice of interconnect IP became more important as we continued to increase the complexity and performance of Sophon AI SoCs. The Arteris Ncore cache coherent interconnect IP allowed us to increase our on-chip bandwidth and reduce die area, while being easy to implement in the backend. The Ncore IP’s configurability helped us optimize the die area of our SoC, which permits us to offer our users more performance at lower cost.”


Haichao Wang, CEO, Bitmain

Topics: SoC, NoC, new customer, performance, AI chips, ML/AI, scalable hardware, on-chip bandwidth

Arteris IP FlexNoC® Interconnect Licensed by Baidu for Kunlun AI Cloud Chips for Data Center

NoC interconnect IP optimizes dataflow for revolutionary Cloud-To-Edge artificial intelligence (AI) system-on-chip (SoC) architecture

CAMPBELL, Calif. – January 15, 2019 – Arteris IP, the leading supplier of innovative, silicon-proven network-on-chip (NoC) interconnect intellectual property, today announced that Baidu has licensed Arteris IP FlexNoC Interconnect for use in its high-performance Kunlun AI cloud chip for data centers.

“The Arteris FlexNoC interconnect IP helps us greatly by enabling not only high-bandwidth on-chip communications but also load-balanced data traffic to off-chip memory, all while simplifying our backend timing closure. In addition, Arteris IP’s strong local support team has been a trusted partner in our AI chip development projects.”


Jian Ouyang, Principal Architect, Baidu

Topics: new customer, China, machine learning, artificial intelligence, neural network, AI chips, training chips, FlexNoC AI Package, Baidu

Arteris IP Announces New FlexNoC® 4 Interconnect IP with Artificial Intelligence (AI) Package

Industry leading commercial interconnect IP accelerates development of next-generation deep neural network (DNN) and machine learning systems

CAMPBELL, Calif. – October 31, 2018 – Arteris IP, the world’s leading supplier of silicon-proven commercial network-on-chip (NoC) interconnect intellectual property (IP), today announced the new Arteris IP FlexNoC version 4 interconnect IP and the companion AI Package. FlexNoC 4 and the AI Package (“FlexNoC 4 AI”) implement many new technologies that ease the development of today’s most complex AI, deep neural network (DNN), and autonomous driving systems-on-chip (SoC).

“Numerous startups are attempting to develop SoCs for neural-network training and inference, but to be successful, they must have the interconnect IP and tools required to integrate such complex, massively parallel processors while meeting the requirements for high-bandwidth on-chip and off-chip communications. Arteris IP has the experience and interconnect IP to help these companies succeed, and FlexNoC 4 with the AI Package provides the features required for AI chips in an easy-to-use and highly configurable form.”


Mike Demler, Senior Analyst and Senior Editor, The Linley Group & Microprocessor Report

Topics: SoC design, new product, machine learning, artificial intelligence, neural network, QoS, AI chips, training chips, NoC multicast, ring NoC, FlexNoC AI Package, mesh NoC, torus NoC

Arteris IP FlexNoC® Interconnect IP Licensed by Iluvatar CoreX for Artificial Intelligence Applications

CAMPBELL, Calif. – October 16, 2018 – Arteris IP, the leading supplier of innovative, silicon-proven network-on-chip (NoC) interconnect intellectual property, today announced that Iluvatar CoreX has licensed Arteris IP FlexNoC Interconnect for a deep learning SoC application. Iluvatar CoreX is a company focused on designing high-end/cloud computing chips and computing infrastructure software, with R&D centers in Nanjing, Shanghai, Beijing, and Silicon Valley.

“We chose the Arteris FlexNoC cache coherent interconnect because of its design flexibility and market-leading power, performance, and area results. Using FlexNoC interconnect IP will allow us to get exactly the type of interconnect that we need for our SoCs, backed up by strong local support.”


Yunpeng Li, Chairman and CEO, Iluvatar CoreX

Topics: SoC design, new customer, China, neural networks, FlexNoC interconnect, deep learning, AI chips