Arteris Articles

Semiconductor Engineering: Computing Where Data Resides

Kurt Shuler, Vice President of Marketing at Arteris IP, is quoted in this new Semiconductor Engineering article:

Computing Where Data Resides

March 29th, 2021 - By Ann Steffora Mutschler

Computational storage approaches push power and latency tradeoffs.

“Ten years ago, solid-state drives were new,” said Kurt Shuler, vice president of marketing at Arteris IP. “There really wasn’t anything like an enterprise SSD. There were little microcontrollers running on platter-type hard drives. That was where semiconductors were then. Since that time, so much has changed. A lot of startups were doing really sophisticated SSD controllers, and the problem initially was that NAND flash consumes itself while it’s operating, so you always have to check the cells. Then, once you find out they’re bad, you must rope them off and tell them not to save anything there anymore. If you buy a 1-terabyte SSD, it actually has more than 1 terabyte because it’s grinding itself to death as it operates. For the SSD controllers, that was the initial challenge. But now, storage disk companies have undergone a lot of consolidation. If you look at what’s going on in computational storage, we have customers who are doing SSD storage and controllers for the data center that are focused on a particular application, such as video surveillance, so there is computation actually within those controllers that is dealing with that particular use case. That is completely new. Within that computation, you’ll see things like traditional algorithmic, if/then analysis. Then, some of it is trained AI engines. All of the enterprise SSD controllers are heading in that direction.”
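The bad-block handling Shuler describes is the core job of a flash translation layer: verify cells after use, retire the ones that wear out, and draw replacements from an over-provisioned spare pool. The sketch below is a minimal illustration of that bookkeeping, not any vendor's controller firmware; the block counts, names, and threshold are hypothetical.

    #include <stdio.h>

    #define USER_BLOCKS  1024   /* capacity advertised to the host (hypothetical) */
    #define SPARE_BLOCKS 128    /* over-provisioned pool: why a "1 TB" drive holds more than 1 TB */

    enum block_state { BLOCK_GOOD, BLOCK_BAD };

    static enum block_state blocks[USER_BLOCKS + SPARE_BLOCKS];
    static int next_spare = USER_BLOCKS;    /* first unused spare block */

    /* Called when a block fails post-program verification: rope it off
     * so nothing is saved there anymore, and map in a spare. */
    static int retire_block(int bad)
    {
        blocks[bad] = BLOCK_BAD;
        if (next_spare >= USER_BLOCKS + SPARE_BLOCKS)
            return -1;                      /* spare pool exhausted: drive end-of-life */
        return next_spare++;
    }

    int main(void)
    {
        int replacement = retire_block(42);
        printf("block 42 retired, data remapped to block %d\n", replacement);
        return 0;
    }

As the spare pool drains, usable capacity holds steady while remaining endurance falls, which is why enterprise drives over-provision far more aggressively than consumer ones.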

Topics: SoC NoC network-on-chip enterprise SSD semiconductor engineering arteris ip cache interconnects kurt shuler computational storage AI engines

Semiconductor Engineering: Domain-Specific Memory

Michael Frank, Fellow and System Architect at Arteris IP, is quoted in this new Semiconductor Engineering article:

Domain-Specific Memory

March 11th, 2021 - By Brian Bailey

Rethinking fundamental approaches to memory could have a huge impact on performance.

“Remember video memories — DRAM with built-in shift registers?” asks Michael Frank, fellow and system architect at Arteris IP. “Perhaps GDDR [1-5], special cache tag memories, or associative memories back in the days of TTL? A lot of these have not really survived because their functionality was too specific. They targeted a unique device. You need a large enough domain, and you are fighting against the low cost of today’s DRAM, which has the benefit of high volume and large-scale manufacturing.”
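For readers who never met one as a discrete part: a cache tag memory stores the high-order address bits of whatever currently occupies each cache line, and a lookup is just a compare against those stored bits. A minimal direct-mapped sketch, with illustrative field widths not taken from any device Frank mentions:

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_SETS   256          /* direct-mapped, 8-bit index (illustrative) */
    #define LINE_BYTES 64           /* 64-byte lines, 6-bit offset */

    struct tag_entry {
        bool     valid;
        uint32_t tag;               /* high-order address bits held in the tag memory */
    };

    static struct tag_entry tags[NUM_SETS];

    /* A hit means the tag memory already holds the upper address bits
     * of the line this address falls in. */
    static bool cache_hit(uint32_t addr)
    {
        uint32_t index = (addr / LINE_BYTES) % NUM_SETS;
        uint32_t tag   = addr / (LINE_BYTES * NUM_SETS);
        return tags[index].valid && tags[index].tag == tag;
    }

An associative memory performs that same compare against every stored tag in parallel rather than indexing a single set; building that parallel compare logic out of discrete TTL parts is exactly the kind of too-specific structure Frank says did not survive.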

Topics: SoC NoC network-on-chip semiconductor engineering arteris ip GPUs cache DRAM interconnects Michael Frank HBM

Semiconductor Engineering: Chiplets For The Masses

Michael Frank, Fellow and System Architect at Arteris IP, is quoted in this new Semiconductor Engineering article:

Chiplets For The Masses

March 3rd, 2021 - By Brian Bailey

Chiplets are technically and commercially viable, but not yet accessible to the majority of the market. How does the ecosystem get established?

“I look at the pictures of Intel’s new chips, and it turns out there are eight compute tiles that could be called chiplets, put together with some strips in the middle that contain cache and interconnect tiles,” says Michael Frank, fellow and system architect at Arteris IP. “And it is all sitting on a silicon substrate. There are clearly places where it is worth the money, and worth the efforts. But this paradigm has to be built on standards. It needs to cover the electrical properties, communications, physical attributes, etc. You cannot build different chiplets for every company. No matter how you look at it, it is still a chip and you have to go through all the steps you normally would for a tape-out.”

Topics: SoC NoC network-on-chip moore's law semiconductor engineering arteris ip cache interconnects intel Michael Frank chips alliance darpa

Semiconductor Engineering: What Happened To Execute-In-Place?

Michael Frank, Fellow and Chief Architect at Arteris IP, is quoted in this new article in Semiconductor Engineering:

What Happened To Execute-In-Place?

August 25th, 2020 - By Bryon Moyer

The concept as it was originally conceived no longer applies. Here’s why.

“Demand-paging virtual memory is nothing else than a cache,” noted Michael Frank, fellow and chief architect at Arteris IP. But then Android became available for free, unlike the planned OSes. So the strategy changed from demand paging to moving the entire code base from flash to DRAM, and then using the SRAM cache mechanism to further manage instruction access times, all in the interest of lower cost.
 
Frank also stated, “My definition of execute in place is where you do not have an address change, where you execute in a cached way, and your original source of the code or the data is still at the same address that you are executing at.”
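On a hosted OS, demand-paging an executable meets Frank's definition exactly: the code runs from a file-backed mapping, at the address where its original source is mapped, with no copy to a second address. A minimal POSIX sketch, assuming a hypothetical file code.bin of position-independent machine code and trimming most error handling:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        /* "code.bin" is a hypothetical blob of position-independent machine code. */
        int fd = open("code.bin", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        fstat(fd, &st);

        /* File-backed executable mapping: pages fault in on demand, and the
         * code keeps executing at the address where its source is mapped.
         * No copy into a separate DRAM buffer is ever made. */
        void *map = mmap(NULL, st.st_size, PROT_READ | PROT_EXEC, MAP_PRIVATE, fd, 0);
        if (map == MAP_FAILED) { perror("mmap"); return 1; }

        void (*entry)(void) = (void (*)(void))map;
        entry();                            /* run directly from the mapping */

        munmap(map, st.st_size);
        close(fd);
        return 0;
    }

The flash-to-DRAM strategy the article describes gives up this property: once the code is copied, it executes at a different address than its source, and the copy itself must be paid for in DRAM capacity.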
 
Topics: SoC NoC technology semiconductor engineering soc architecture AI cache DRAM noc interconnect IP market SRAM MCUs