Where GenAI intersects with threat modeling: 3 key benefits for AppSec

As application security (AppSec) leaders seek to drive Security by Design initiatives in 2024, threat modeling is becoming more prevalent. In one recent study, 73% of companies said they threat model their software at least annually, and half said they do it for every release. And 74% of the surveyed organizations said they'll grow their threat modeling programs in the coming year.

While the rise of software supply chain attacks has made the need for threat modeling clear to a growing number of companies, the practice is labor-intensive, hard to automate, and demands many person-hours. Many practitioners hope that generative AI (GenAI) and large language models (LLMs) can ease those burdens and speed the process.

Here are three major benefits that security teams can realistically expect from the intersection of GenAI and threat modeling — and what not to expect.


1. Get a handle on threat modeling's subtasks

The prevailing opinion among AppSec and threat modeling experts is that GenAI is a long way from offering any kind of end-to-end automation of the threat modeling process. But they believe that GenAI, when targeted and limited in scope, can help threat modeling teams, experienced and beginner alike, crush the practice's subtasks.

Chris Romeo, co-founder and CEO of threat modeling firm Devici (and a co-author of the Threat Modeling Manifesto), said in a recent roundtable on AI in threat modeling that GenAI can be a useful tool for threat modelers, but it's not a panacea:

"I don't see a world where we just have the AI do the threat model and we would all sign off on it and say, 'Yeah, that's perfect!' It's not going to replace anything we do. But there's a world where that AI can help us be better at what we do. And I think that's the near-term value [proposition]."
Chris Romeo

Brook Schoenfield, author of many books on threat modeling and CTO of Resilient Software Security, said in the same roundtable discussion that AI shouldn't be the "great, grand replacement" of human threat modelers. Subtasks where AI could assist or automate work include creating data flow diagrams (DFDs) and generating potential threat scenarios.
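
To make the DFD subtask concrete, here is a minimal sketch of asking an LLM for a first-draft diagram. It assumes the OpenAI Python SDK's chat-completions interface; the model name and the system description are placeholders, and the diagram that comes back is raw material for human review, not a finished artifact.

```python
# Sketch: asking an LLM for a first-draft data flow diagram (DFD).
# Assumes the OpenAI Python SDK (v1.x); model name and system description
# are placeholders. The output is a starting point, not a reviewed DFD.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

system_description = """
A web storefront: browser -> API gateway -> order service -> Postgres.
Payments go to a third-party processor; order events land on a Kafka topic.
"""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": "You are a threat modeling assistant. Return only a "
                       "Mermaid flowchart of the data flows, marking trust "
                       "boundaries as subgraphs.",
        },
        {"role": "user", "content": system_description},
    ],
)

# Mermaid source the team can render, correct, and annotate.
print(response.choices[0].message.content)
```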

Schoenfield explained that AI can not only help veterans speed up their tasks, but also hold the hand of the less experienced.

"Let's look at really discrete problems and solve some of the things that will help. More importantly, help the hundreds of thousands or millions of developers who don't have all that attack knowledge and don't have the time to go get it."
Brook Schoenfield

Abhay Bhargav, chief research officer at the training firm AppSecEngineer, said this is exactly the approach he advocates, and he's developing trainable methodologies that help developers and security teams apply it. In a recent webinar, Bhargav said he believes this is the path to drastically cutting down the time it takes threat modeling teams to generate usable models.

"The approach I take and I teach is to go from this big, massive, contiguous task — we need to do a threat model — to breaking it down into component patterns and passing a pattern, along with all of your other input, into an LLM. Then the LLM generates the output for that pattern. For example, you could say, 'Please generate the security objectives for this system.' And then, 'Please generate the threat scenarios for those security objectives and these (additional) information assets.'"
Abhay Bhargav
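
A minimal sketch of that pattern-by-pattern flow, assuming the OpenAI Python SDK (the helper function, prompts, and model name are illustrative, not Bhargav's exact methodology), might look like this:

```python
# Sketch of breaking threat modeling into component patterns: each subtask
# is one prompt, and earlier outputs feed later ones. Assumes the OpenAI
# Python SDK (v1.x); prompts and model name are illustrative.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send one pattern-sized prompt to the LLM and return its answer."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

system_context = "A mobile banking app with a REST backend and an OAuth2 IdP."

# Pattern 1: security objectives for the system.
objectives = ask(
    f"System: {system_context}\n"
    "Please generate the security objectives for this system."
)

# Pattern 2: threat scenarios, grounded in the objectives just generated.
threats = ask(
    f"System: {system_context}\n"
    f"Security objectives:\n{objectives}\n"
    "Please generate threat scenarios for those security objectives."
)

print(threats)
```

The point of the chain is that each prompt stays small and reviewable, so a human can check or correct one pattern's output before it feeds the next.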

2. Eliminate blank-page syndrome

One of the big benefits that GenAI can bring to threat modeling is kick-starting thought processes around potential attacks and vulnerable surfaces, said Kim Wuyts, a privacy engineer and threat modeling advocate who works as a manager of cyber and privacy for PwC Belgium. She noted in the recent threat modeling roundtable:

"For people suffering from the blank-page syndrome, you get something to get you going. It's not really automation as people like to think about AI. I put in this prompt with three sentences about a scenario, and look, I've got 20 useful threats. That's great, because it saves you some time, but it's just the low-hanging fruit."
Kim Wuyts
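
As a hedged illustration of that blank-page use (the scenario, prompt wording, and model name here are invented for the example), a few sentences in can yield a starter list of threats to triage:

```python
# Sketch: a short scenario in, a starter list of candidate threats out.
# Assumes the OpenAI Python SDK (v1.x); scenario and model name are
# illustrative. The results are low-hanging fruit for the team to prune.
from openai import OpenAI

client = OpenAI()

scenario = (
    "A telehealth portal stores patient records. Doctors log in with SSO. "
    "A nightly job exports anonymized data to an analytics vendor."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{
        "role": "user",
        "content": f"Scenario: {scenario}\n"
                   "List 20 candidate threats, one per line, each tagged "
                   "with a STRIDE category.",
    }],
)

# A starting list for the team to refine, not a finished threat model.
print(response.choices[0].message.content)
```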

Schoenfield said threat modelers often "get stuck in only the places they know," and GenAI can help a threat modeling team get out of those thinking ruts.

"Just getting started is a big deal for people, and looking comprehensively after that. Asking the AI, 'What are all the domains I should look at for this system?' might actually be a huge win."
Brook Schoenfield

3. Hand-hold your team with knowledge and well-timed guidance

AI in threat modeling could also act as a particularly informed assistant offering deep access to knowledge and well-timed guidance, said threat modeling advocate Izar Tarandach, senior principal security architect at SiriusXM and a participant in the threat modeling roundtable.

"I would love to see, not one LLM doing the whole thing, but small agents here and there being almost like Microsoft Clippy. It's like having a copilot while you're doing threat modeling that's helping you in those small tasks that you need to get a good threat model done. But it's not doing it for you; it is bringing you information and knowledge."
Izar Tarandach

Devici's Romeo said he likes to call this "AI-infused threat modeling," adding that, done right, it will be something more than an irritating chatbot. This is what he's currently exploring in his own work.

"What we're doing is figuring out how we can infuse AI in certain points of the threat modeling system to make results better for you as the person or faster to generate. In a lot of cases, you won't even know where there's an LLM generating it. That's my goal."
Chris Romeo

This kind of hand-holding can be especially important for developers on the team who are not used to thinking like attackers, Tarandach said. Having GenAI as a resource could allow such developers to ask questions such as, "Given certain system parameters, how would you attack the system?" and that could be invaluable for kicking off threat modeling discussions, he said.

"It is not going to create a threat model for you, but it might very well inform a threat model for you."
Izar Tarandach

Keep GenAI's limitations firmly in mind

As AppSec teams and developers seek to glean these benefits from AI-infused threat modeling, they need to keep in mind that each benefit comes with significant caveats. GenAI in particular is not very explainable, even by experts in the LLM world, and it is difficult to verify the integrity and accuracy of its outputs.

Romeo said that means you need to treat AI assistance like a new member of the threat modeling team who doesn't yet have a ton of experience. That machine-driven team member can uncover new ideas and bring them to the table, but it is the diversity and knowledge base of the rest of the team that should lead decisions and shape the final output of each threat model. "AI is not ready for prime time," he said. "AI is not ready to be in the critical path of security decisions."

PwC Belgium's Wuyts said that, for now, AI should be relegated in threat modeling to the role of junior assistant.

"Go for AI as an assistant, not as a decision maker. If we don't understand it, then we cannot trust it."
Kim Wuyts
