Arteris Press Releases

Arteris IP Announces New FlexNoC® 4 Interconnect IP with Artificial Intelligence (AI) Package

Industry-leading commercial interconnect IP accelerates development of next-generation deep neural network (DNN) and machine learning systems

CAMPBELL, Calif. – October 31, 2018 – Arteris IP, the world’s leading supplier of silicon-proven commercial network-on-chip (NoC) interconnect intellectual property (IP), today announced the new Arteris IP FlexNoC version 4 interconnect IP and the companion AI Package. FlexNoC 4 and the AI Package (“FlexNoC 4 AI”) implement many new technologies that ease the development of today’s most complex AI, deep neural network (DNN), and autonomous driving systems-on-chip (SoC).

Numerous startups are attempting to develop SoCs for neural-network training and inference, but to be successful, they must have the interconnect IP and tools required to integrate such complex, massively parallel processors while meeting the requirements for high-bandwidth on-chip and off-chip communications. Arteris IP has the experience and interconnect IP to help these companies succeed, and FlexNoC 4 with the AI Package provides the features required for AI chips in an easy-to-use and highly configurable form.


Mike Demler, Senior Analyst and Senior Editor, The Linley Group & Microprocessor Report

Topics: AI chips, SoC design, training chips, QoS, neural network, new product, FlexNoC AI Package, NoC, multicast, mesh NoC, ring NoC, torus NoC, machine learning, artificial intelligence

Arteris IP FlexNoC® Interconnect IP Licensed by Iluvatar CoreX for Artificial Intelligence Application

 

CAMPBELL, Calif. – October 16, 2018 – Arteris IP, the leading supplier of innovative, silicon-proven network-on-chip (NoC) interconnect intellectual property, today announced that Iluvatar CoreX has licensed Arteris IP FlexNoC Interconnect for a deep learning SoC application. Iluvatar CoreX is a company focused on designing high-end/cloud computing chips and computing infrastructure software, with R&D centers in Nanjing, Shanghai, Beijing, and Silicon Valley.

“We chose the Arteris FlexNoC cache coherent interconnect because of its design flexibility and market-leading power, performance, and area results. Using FlexNoC interconnect IP will allow us to get exactly the type of interconnect that we need for our SoCs, backed up by strong local support.”


Yunpeng Li, Chairman and CEO, Iluvatar CoreX

Topics: new customer, AI chips, SoC design, China, neural network, deep learning, neural networks, FlexNoC interconnect

Arteris IP FlexNoC® Interconnect IP Licensed by Enflame (Suiyuan) Technology for Multiple Artificial Intelligence (AI) Chips

Network-on-Chip (NoC) interconnect IP provider enables faster AI training in cloud datacenter

CAMPBELL, Calif. – October 9, 2018 – Arteris IP, the world’s leading supplier of silicon-proven commercial network-on-chip (NoC) interconnect intellectual property (IP), today announced that Enflame (Suiyuan) Technology has purchased multiple licenses of Arteris FlexNoC interconnect IP for use as the on-chip communications backbone of its artificial intelligence (AI) training chips for cloud datacenters.

“Arteris FlexNoC interconnect IP is the only interconnect that would allow our AI chips to achieve their high bandwidth requirements while also meeting our QoS requirements. Using Arteris NoC technology allows our architecture to take maximum advantage of state-of-the-art HBM2 memories to avoid system-level data starvation, which is a major problem with less efficient AI training chips.”


Arthur Zhang, COO, Enflame

Topics: new customer, AI chips, SoC design, training chips, QoS, China, neural network

Arteris IP and Synopsys Accelerate the Optimization of Heterogeneous Multicore Neural Network Systems-on-Chip

Ncore Cache Coherent Interconnect IP and Synopsys Platform Architect fast-track integration for autonomous driving and artificial intelligence (AI) markets

CAMPBELL, Calif. — January 30, 2018 — Arteris IP, the innovative supplier of silicon-proven commercial system-on-chip (SoC) interconnect IP, today announced the integration of its Ncore Cache Coherent IP with the Synopsys® Platform Architect™ virtual prototyping solution to provide designers of neural network and autonomous driving SoCs with the ability to analyze system-level performance and power consumption earlier in the design cycle for their next-generation multicore architectures.

“Combining Platform Architect and Ncore SystemC models provides designers with the ability to analyze and optimize an entire heterogeneous multicore SoC architecture before RTL is available.”


Eshel Haritan, Vice President of R&D, Verification Group, Synopsys

Topics: Ncore, Synopsys, Platform Architect, SystemC, neural network, artificial intelligence, Ncore cache coherent interconnect