Kurt Shuler, VP of Marketing at Arteris IP, comments on 'What needs to be accelerated' in this Semiconductor Engineering article:
What Makes A Good Accelerator
October 25th, 2018 - By Ann Steffora Mutschler
Optimizing processor architectures requires a broader understanding of data flow, latency, power and performance.
As with any other type of optimized hardware, the how of acceleration depends on what must be accelerated. This depends heavily on the software algorithms, and sometimes there are multiple algorithms in the design.
“It’s software-driven hardware design, so it’s kind of backwards,” said Kurt Shuler, vice president of marketing at Arteris IP. “In the old days you’d build a chip and then say, ‘Given this chip, what software can I run on it? And how do I tweak the software to make it more efficient?’ With neural net processing, because you’re trying to get this super huge increase in efficiency, you’re saying, ‘Given the algorithms I’m trying to do, what do I have to do in the hardware?’ And there are two aspects to that. One is how finely to slice the mathematical problems into how many different types of hardware accelerators there are. Two, within the architecture, how do I connect it for the data flow?”
As such, the algorithm is more important than both the software and the hardware, he said. “Ideally you would accelerate everything as much as possible and just have a tiny little bit of software on there that just does station-keeping type stuff. If you wanted to map the best performance, the more you have in the hardware itself, the faster it’s going to run. So each of these teams is doing a calculation as far as, what’s the software bill of materials to do this, what’s the hardware bill of materials, and it’s a slider that they can work with.”
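The "slider" Shuler describes can be illustrated with a toy cost model. The following sketch is not from the article: the kernel names, cycle counts, speedups and area figures are all made-up assumptions purely to show the trade-off. Each kernel has a software cycle cost; moving it into a dedicated accelerator divides that cost by a speedup but adds silicon area. Sweeping how many kernels are hardened traces out the hardware/software bill-of-materials curve.

```python
# Toy illustration of the hardware/software partitioning "slider".
# All numbers and kernel names are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Kernel:
    name: str          # hypothetical kernel name
    sw_cycles: int     # cycles if the kernel runs in software
    hw_speedup: float  # speedup if moved into a dedicated accelerator
    hw_area: float     # silicon area cost (arbitrary units)

KERNELS = [  # made-up workload mix
    Kernel("conv",         8_000_000, 50.0, 4.0),
    Kernel("matmul",       5_000_000, 40.0, 3.0),
    Kernel("activation",   1_000_000, 10.0, 0.5),
    Kernel("housekeeping",   200_000,  1.5, 1.0),
]

def evaluate(num_hardened: int) -> tuple[float, float]:
    """Total cycles and area when the `num_hardened` best kernels are in hardware."""
    # Harden the kernels that save the most cycles first.
    ranked = sorted(
        KERNELS,
        key=lambda k: k.sw_cycles * (1 - 1 / k.hw_speedup),
        reverse=True,
    )
    cycles = area = 0.0
    for i, k in enumerate(ranked):
        if i < num_hardened:
            cycles += k.sw_cycles / k.hw_speedup  # runs in hardware
            area += k.hw_area                     # pay the silicon cost
        else:
            cycles += k.sw_cycles                 # stays in software
    return cycles, area

if __name__ == "__main__":
    for n in range(len(KERNELS) + 1):
        cycles, area = evaluate(n)
        print(f"hardened={n}  cycles={cycles:,.0f}  area={area:.1f}")
```

Moving the slider from all-software (`n=0`) toward all-hardware shows the pattern the quote points at: the more of the workload that lives in dedicated hardware, the faster it runs, at the cost of a larger hardware bill of materials.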
To read the entire article, please click here.