Kurt Shuler, Vice President of Marketing at Arteris IP, comments in this Semiconductor Engineering article on the edge emerging as a particular security concern because some of these devices can kill you:
Using AI Data For Security
February 20th, 2019 - By Ann Steffora Mutschler
Pushing data processing to the edge has opened up new security risks, and lots of new opportunities.
The edge and beyond
“It’s cars and robots and medical devices,” said Kurt Shuler, vice president of marketing at Arteris IP. “These things can kill you two ways. A cosmic ray can cause a bit to flip, and things go awry. The other way is that the AI may work as intended, but what it decides to do from its neural net application is the wrong thing. In that case, the safety of the intended function is bad.”
There’s even a new spec just for this: “ISO/PAS 21448:2019 Road vehicles — Safety of the intended functionality.” It captures how to analyze the AI-powered systems going into cars, so they work as designed.
Security can impact all of these systems. “There’s a totally separate set of specs, and a totally separate set of Ph.D. geeks working on safety and on security,” said Shuler. “What’s disconcerting is that the effects of any of these things, especially from a functional safety standpoint and a security standpoint, can be the same. Whether a bit flips or an engineer flipped a bit, someone can get hurt. Yet these sets of experts don’t really talk to each other too much. This was addressed in the new ISO 26262:2018 specification that came out in December, which includes specific text to address this. It basically says you must coordinate with security guys, but unless security is somehow mandated to a certain level — like functional safety is in cars and trains and other verticals — nobody really cares. It’s like insurance. Nobody wants to pay for too much security.”
For more information about ISO 26262:2018 Part 11, please download the presentation "Fundamentals of ISO 26262 Part 11 for Semiconductors".