Arteris Articles

FlexNoC Version 3 available now!

We announced FlexNoC Version 3 today!

Our primary engineering goal with this new technology release was to increase the productivity of the SoC designers who use our products.

As the size and complexity of our users’ SoC designs increased over the years, it became increasingly difficult to visualize and optimize a huge design in a single GUI window. In addition, we saw the need to make the FlexNoC user interface adapt to whatever task the user is performing, rather than presenting the same set of options regardless of the task at hand.

Under the hood, we increased the performance of all aspects of the product: not just user interface responsiveness, but also performance modeling and exploration.

Here are the top 3 features in the new FlexNoC Version 3:

  1. Switch-based topology editor – It is now easier to create, characterize and modify large designs while keeping access to the entire SoC topology available within a single view.
  2. Topic- and activity-based user interface – Years of customer feedback and human factors research have resulted in a streamlined interface that makes it easier for SoC architects and designers to perform complex and repetitive tasks.
  3. NoC composition enhancements – Users can more easily break large interconnect designs into smaller modules for implementation by different sub-teams, and can quickly combine separate designs or modules into a single interconnect instance for integration.

NEXT STEPS:

Current customers can upgrade from their existing version of FlexNoC to FlexNoC Version 3. Just contact your Arteris sales manager.

For prospective customers, please contact me and we'll get you started!

For more details, please read our press announcement below.

 

Topics: network-on-chip SoC design Arteris FlexNoC

Divide & Conquer: Dispersed global design teams need NoC fabric IP

It’s no surprise that most corporate system-on-chip (SoC) design teams are dispersed throughout the world, with different functional teams often located in different countries and continents. For example, we have many customers whose SoC architecture is defined in the United States, but subsystems such as graphics and signal processing are designed elsewhere. Companies choose this "divide and conquer" approach in the hopes of reducing design costs without affecting time to market.

But are there hidden perils to the divide and conquer SoC design approach? And what can be done to avoid these problems?

Dividing SoC design tasks is easy. Reassembly is not.

According to a recent survey of our customers, the management of global design teams doing geographically distributed development is one of the most important issues facing SoC design managers today. After further research and follow-up interviews with respondents, I discovered that separating a system-on-chip architecture into different work packages and “parceling out” the implementation of SoC subsystems to different design teams is not the major issue. Rather, the big problems occur once the team attempts to reassemble the various subsystems into a single SoC design.

And the real killer is that most of these problems are not discovered until very late in the design cycle, after SoC reassembly and top-level verification are attempted.

This results in schedule slips.

To understand what is happening, imagine an IP subsystem design team that receives a specification from headquarters for the company’s latest SoC design. The design team implements its particular subsystem to integrate with the rest of the SoC design according to their best understanding of the overall specification.

Of course, the SoC specification is constantly changing, and the design team members do not always receive spec updates in a timely manner. Furthermore, there are ambiguities in the SoC specification that must either be resolved directly with the SoC architect or ignored for convenience, with the assumptions made by the subsystem design team documented only in release notes.

As the schedule progresses, the top-level SoC team receives the IP components from the different subsystem IP teams and assembles the subsystems into the final SoC design for verification. But the blocks do not connect: transaction protocols, address locations and registers do not map to what the SoC assembly team is expecting.

What happened?

Changes cause schedule slips

In the software world, we might call this a “change control” problem. As each IP subsystem team progressed with their own designs, they attempted to keep up with changes in the overall SoC specification for on-chip connectivity (transaction protocol definitions, register definitions, and memory map locations, for example). However, as the definition of the overall SoC spec was changing, so was the spec for their own pieces of the design.

The result is that even though all the SoC subsystems passed their own block-level verification tests, it is impossible to assemble and verify the SoC design with these blocks. What happens in the real world is that the top-level (SoC-level) verification team members find these connectivity failures and report them to the SoC team and the IP subsystem teams. Then everybody works together to hack the RTL so that everything connects correctly and passes verification. And the SoC schedule slips.
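
To make the failure mode concrete, here is a minimal sketch in Python (with entirely hypothetical block names and data structures, not a FlexNoC or IP-XACT API) of the kind of address-map consistency check that catches this drift before top-level verification is attempted. It compares the memory map a subsystem team implemented against with the memory map in the current top-level spec and reports every block that is missing, renamed or remapped.

```python
# Hypothetical example: detect drift between a subsystem's assumed memory map
# and the current top-level SoC spec. Names and addresses are illustrative.

# Top-level spec as last published by the SoC architect: block -> (base, size)
soc_spec = {
    "dma_ctrl":   (0x4000_0000, 0x1000),
    "video_regs": (0x4001_0000, 0x4000),
    "cam_if":     (0x4002_0000, 0x1000),
}

# What the subsystem team implemented against (an older spec revision)
subsystem_view = {
    "dma_ctrl":   (0x4000_0000, 0x1000),
    "video_regs": (0x4001_0000, 0x2000),   # region size changed in a later spec rev
    "isp_regs":   (0x4003_0000, 0x1000),   # block was renamed to cam_if upstream
}

def diff_maps(spec, view):
    """Report blocks that are missing, extra, or mapped differently."""
    issues = []
    for name, region in view.items():
        if name not in spec:
            issues.append(f"{name}: not in current top-level spec")
        elif spec[name] != region:
            issues.append(f"{name}: spec {spec[name]} vs subsystem {region}")
    for name in spec:
        if name not in view:
            issues.append(f"{name}: missing from subsystem view")
    return issues

for issue in diff_maps(soc_spec, subsystem_view):
    print("MISMATCH:", issue)
```

The same idea applies to register definitions and transaction protocol parameters; the point is simply that the comparison has to happen against the latest spec revision, not the one the subsystem team started from.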

New NoC technology enables easy SoC reassembly

The good news is that this scenario happens less frequently today than it did only a couple of years ago. On-chip NoC interconnect fabric technology makes it easy to separate a complex, multi-hundred IP block SoC into multiple subsystems that can be implemented anywhere in the world. And new technology allows these pieces to be automatically reassembled into a verifiable SoC no matter what changes were independently made to the transaction protocol definitions, registers or address locations of any part of the SoC.

As the most advanced process nodes migrate down to 20nm and 14nm, the “sweet spot” for most systems-on-chip will descend from today’s 65nm down to 40nm and lower. For SoC makers to realize the cost benefits of 40nm processes and smaller, they will have to put the semiconductor IP of what used to be in two or three chips onto a single SoC. And to do this they will most likely have to work through the challenges of having parts of each SoC designed by multiple design teams located throughout the world.

Today’s advanced NoC technology guarantees that these teams will be able to successfully implement a global design methodology and reap all the cost benefits of smaller process nodes.

Topics: SoC economics network-on-chip SoC design NoC composition

The SoC Interconnect Fabric: A Brief History

The high functional integration of system-on-chip designs today is driving the need for new technological approaches in semiconductor design. Anyone who owns a Samsung Galaxy S4, HTC One or comparable smartphone can see the benefits of integrating onto one chip all the computing functions that were traditionally separate, discrete chips on a PC motherboard. For next-generation devices, developers are driving even greater computing power, higher resolution graphics, and improved media processing into the integrated SoCs that enable these systems. This high level of integration is causing on-chip communications and transaction handling to become a system constraint within the SoC, limiting the achievable performance of SoCs no matter how optimized the individual CPU, GPU and other IP blocks are.

Topics: NoC network-on-chip bus crossbar history

System-Level Design Arteris CTO interview: Faster IP Integration

Faster IP Integration

By Ed Sperling
System-Level Design sat down with Laurent Moll, chief technology officer at Arteris, to talk about interoperability, complexity and integration issues. What follows are excerpts of that conversation.

SLD: What’s the big challenge with IP?
Moll: Interoperability is always a concern. Because of ARM’s dominance, a lot of people are moving to AMBA protocols, whether that’s APB or AXI. The bigger companies typically have something they’ve developed internally, or an existing protocol they’re using. They tend to still have a sizeable legacy piece. They will move away from that eventually, but it will take time. Anytime the entire environment is built around that, it takes a while. The only other thing we have involving interoperability is port interfaces for memory controllers. There’s a lot of baggage around there.

SLD: But there also are lots of little processors on an SoC, too. What impact does that have?
Moll: There are lots of things happening in a modern SoC. In mobile SoCs, there are subsystems for cameras, video and a lot of hardware acceleration. They started from the main CPUs from ARM and it trickled down to the subsystems, and that’s taking over a lot of the SoC. From a verification perspective and an assembly perspective, people don’t want to deal with too many things. So if one of them is dominant, you might as well use it as often as possible.

SLD: Where does the network on chip technology fit in?
Moll: It’s everywhere. We like the fact that people are standardizing, because if IP comes in with a standard protocol like AMBA it means we can connect to it more easily. What happened before was people were essentially growing chips rather than assembling them. It meant that the interfaces to all the IP blocks were custom. It was hard to scale beyond a certain point.

SLD: So what you’re starting to see are the first signs of a maturation of the commercial IP business?
Moll: That’s correct. A lot of companies have moved to a model where, instead of having one flat organization, it’s more of a silo-type of organization where you have a lot of people building subsystems and a separate group assembling them. This whole process, as we see it, is the maturing of the industry. It’s possible to assemble a chip. It’s not easy, but it is the fastest and most efficient way.

SLD: Does that make it easier to choose one IP block versus another?
Moll: Absolutely. People can try different things. They can swap them in and out very easily. And we’re also starting to see virtual prototypes where the software vendors are building hardware that will never actually tape out. But they can test their software and how it works with different pieces from different vendors. This is the first time we’re seeing platform assembly bubbling up the food chain to people building software or systems. If you’re Microsoft, for example, for Windows 8 you can build a virtual platform with a NoC and test out how it operates on an ARM processor or a GPU. This platform will never exist in reality, but it can run tests, it can run software stacks and you can shop it around to vendors. It is part of the maturation.

SLD: If you’re comparing one piece of IP to another, what is it like today versus five years ago?
Moll: Five years ago, a lot of IP was internal rather than commercially available. Interoperability was a problem. You had this thing that you grew internally that didn’t connect to anything very easily. Then you had this other IP with a standard interface and you couldn’t connect them. People still build internal IP, but they build it in a way that it can be connected easily. Even now nothing connects seamlessly. AMBA helps because it’s a standard, but it’s more of a catalog of things you could do with the interface. So there is still a lot of tweaking. You can connect them in a basic fashion. But if you really want to take advantage of all of this IP, there are still some nuts to turn and things that are necessary to make them work together really well.

SLD: Is the goal lower NRE or time-to-market improvements with the same NRE?
Moll: For the consumer markets, time to market is everything. And it’s time to market not just for the first chip and making sure it works, but also for the 10 derivatives they’re going to make. In the past, it was like a butcher shop. You had to cut things up carefully and make sure it all still worked. Our largest customers can just crank out derivatives where most of the work is on the back end. The interconnect is in place, and they just take one thing off and replace it, re-do all the performance regression and they’re done. NRE is less important for them, because when you’re doing large volumes that doesn’t show up. Missing one day on the market does.

SLD: That’s time to market in a small slice of a market, too, right?
Moll: Absolutely, and this is why platforms are so important. Making very big chips work well is still a difficult process. You still have to worry about performance, use cases, power, security, and all these types of things. So it takes a while to get a big platform together. But once you have one that works, you can create derivatives, shrink it, and customize it for all these niches very quickly. That’s where the time to market comes into play.

SLD: There has been a lot of talk about platforms over the years, but they’ve been slow to catch on. What’s changed?
Moll: With the most complex chips, people are moving to platforms. There is so much NRE invested into one thing that you want to be able to get your return. There is a lot of verification and checking the back end to make sure it works. You want to make sure this one thing works really well, and then you use it for two years or three years. You get quite a bit of use out of it. So it’s true that platforms aren’t universal yet, but we do see them with companies that need to build one really complex thing. They invest in a platform so they don’t have to invest a lot of money and time in derivatives.

SLD: We’re starting to see the rise of subsystems, which are a step in that direction.
Moll: Subsystems have been around for quite some time. The subsystem is, in many ways, a political entity in many companies. It’s a silo issue. So the guys in imaging are all the guys who know imaging. The guys in 3D are all the guys who know 3D. They tend to have their own requirements. If they need a microcontroller, they go around the company looking to see if there’s a microcontroller. Or do they contract with ARM for an M0? That has existed for a while. What’s newer is that the interface to that subsystem is becoming standard, and the components inside the subsystem are becoming standard. For the interconnect, we’re finding that companies are using our technology between the subsystems as well as inside the subsystems. When you assemble things that are built to work together, the probability is higher that they will work.

SLD: IP has always been a black box, and subsystems increasingly are collections of IP. What does that do for connectivity?
Moll: It depends on the subsystem, which can look completely different. For a security subsystem, where you have a CPU, an SRAM and a bunch of devices, this is like a small system, so it’s better to have transparency. When you get into QoS and security, at the top level it’s easier to understand if it’s not a black box. Otherwise you may have to spend time looking at an interface and trying to understand what that interface does. We see that in the subsystems where the interconnect is used for what look like small systems. For a CPU, there are a bunch of things that hang together in a very specialized way. They don’t look like subsystems. They’re just a big block and there’s an interface to them.

SLD: If you’re creating an SoC with a NoC, where are you seeing issues in connecting everything up?
Moll: In the past five years, a new job description has emerged called SoC architect. Before that there was just a chip architect. So for this new job, you have a bunch of IPs. You know what’s inside some of them and some of them you don’t, and your job is to put them together in such a way that it works. The reason this job exists is that handling the whole thing flat has become way too complicated, just as people 20 years ago realized that flattening the whole netlist to make a chip isn’t going to work as well. At the architectural level, the problems are transversal, such as power, performance, security and debug infrastructure. Something like a standard interconnect helps you solve a number of those issues. Then you just have to verify that it does what you want. The other thing involves all the stuff at the back end, which is also an assembly process. You also have subsystems, which may look different from the others. Some have hard macros, others don’t. There’s a whole top-level assembly process of all your clock-domain crossings, power domains, DFT. The assembly issue is making these cross-functional topics work together. That’s where you spend a lot of time, and then verifying everything.

SLD: The new wrinkle in this is software, which can interact at many different levels. Does the network now have to account for the software?
Moll: When you’re building your own RTL, it means it won’t be ready for software until late in the process. If you’re assembling parts, you can start assembling the chip early. You either have functional or RTL models, and you can create a virtual platform way before tapeout. We see a lot of people doing that. The advantage for the NoC is that you have RTL right away. It may not be the final RTL, but functionally for the software guys it’s good enough.

As featured in System-Level Design.

Topics: NoC network-on-chip Laurent Moll IP integration network-on-chip fabric IP