Semiconductor Engineering: The Evolving Role Of AI In Verification

Ann Mutschler, Mar 26, 2025

Semiconductor verification is changing to integrate AI with human expertise.

Experts At The Table: The pressure on verification engineers to ensure the functional correctness of devices has increased exponentially as chips have gotten more complex and evolved into SoCs, 3D-ICs, multi-die chiplet assemblies, and beyond. Semiconductor Engineering sat down with a panel of experts, which included Josh Rensch, director of application engineering at Arteris; Matt Graham, senior group director of verification software product management at Cadence; Vijay Chobisa, senior director of product management for the Veloce hardware-assisted verification platform at Siemens EDA; and Frank Schirrmeister, executive director, strategic programs, System Solutions at Synopsys. What follows are excerpts of that discussion. Click here for part one of this discussion. Part two is here.

SE: With AI, do we need verification experts with a generalist perspective who understand how you can apply AI to make you more productive? That isn’t something AI can tell you, particularly when it comes to knowledge of the verification flow, the tools, how they all work together, how interdependent they are.

Graham: We’re in the phase of AI where you need fairly deep expertise, because we’re still at the point of, ‘I don’t really trust it,’ or, ‘It doesn’t quite work.’ When I first started my career, somebody had wired up a breadboard with discrete components because they weren’t sure the FPGA was going to do what we thought it was going to do. Nobody does that anymore, but we’re at that phase of AI, for verification or anything else, where somebody really smart has to look at the output and say AI is actually doing what it’s supposed to be doing. Hopefully we get this right before we all retire.

Schirrmeister: We’re approaching the era where the snicker test is so important, where you look at a result and intuitively feel whether it can be right. It reminds me of my physics teacher, who was upset with me because for a particular problem I was able to go through the motions of all the math and the process and everything, but my result was three orders of magnitude off because I had made some wrong assumption. He said, ‘Frank, you didn’t understand the basic problem. You followed the process, and I’m really upset that I have to give you an A minus, but your input was wrong. You clearly didn’t apply the snicker test at the end, because you were three orders of magnitude off. I’m upset at you personally, but here’s your A minus.’ That’s how it is with AI. It may give you completely wrong results at the end. You need a basic knowledge of how to use it and whether the results survive the snicker test, where I’m not laughing at the results because they are so obviously wrong. How do we get that into people? I don’t know. It’s an education problem. It’s a learning problem. When I was at an event the other day with students, I was asked, ‘What is the one thing you recommend we do?’ I recommend you do what we in Germany call looking over the edge of your plate. Look at the person next to you, the adjacency, and get a basic understanding of what they do. What is EMI? What is thermal? How does what I do in my element, my verification, impact design and verification? How does that impact power? How does that impact thermal, and so forth? This interdisciplinary exchange helps you learn the generics of everything a little bit. With AI, that balance will be readjusted, because you can give it some of the more mundane tasks. But you still need to be able to identify whether the result is okay. If one thing on the slide were off, the whole slide would automatically be invalidated. So that’s the snicker test. Does it validate?
In other words, you need to up-level when it comes to messaging, but you also need to take a higher view and ask, within that realm of formal versus simulation versus emulation versus prototyping, whether the approach is in itself correct. That’s where the broadness comes in. But then, if you are going into an AMS-type simulation problem, you’ll bring in an AMS expert and ask whether that is a valid approach.

Rensch: There are a number of these ‘doesn’t smell right’ problems with technical papers. Most engineers are afraid of looking stupid, so if they don’t understand something, they won’t question it. I don’t have that fear, so I’ll ask questions when a paper doesn’t make sense.
