As the drumbeat grows louder in technology leadership circles to embed AI agents more deeply into the software development lifecycle, in some cases even using agentic AI to replace midlevel developers, application security (AppSec) is about to go from complex to a lot more complicated.
Aquia chief executive Chris Hughes said the industry is abuzz with hype about agentic AI, or AI agents, which expand the promise of AI by allowing functions to take place semi- or fully autonomously, often before the security implications have been fully assessed. "While there is tremendous potential and nearly unlimited use cases, there are also key security considerations and challenges," Hughes said.
As with so many transformational advances, security teams will get nowhere by trying to obstruct agentic AI. Instead, security leaders and teams must prepare the organization in lockstep with these new AI agents, with new visibility, controls, and governance for the entire software development lifecycle (SDLC).
Here's what your AppSec team needs to know about what's coming with agentic AI — and how to manage risk with increasing SDLC complexity.
[ Get White Paper: How the Rise of AI Will Impact Software Supply Chain Security ]
Signs that the agentic AI genie is out of the bottle
Agentic AI, artificial intelligence designed to make autonomous decisions and take actions within business systems, is not a new phenomenon. What is new is that enhancements to natural language processing (NLP) and the more advanced reasoning of large language models (LLMs) are making agentic AI capable of more complex, chained decisions, and of adapting them to less-defined business use cases.
These increases in the capabilities and versatility of agentic AI are appealing to enterprises. Gartner estimates that by 2028, about one-third of enterprise software applications will include AI agents, and that agents will make it possible to automate at least 15% of today's day-to-day work decisions. The estimate encompasses automatable tasks across a range of business functions, from sales to project management.
Tom Coshow, senior director analyst at Gartner, wrote recently that one business function in particular would feel the shift first: software development. "Software developers are likely to be some of the first affected, as existing AI coding assistants gain maturity," Coshow wrote.
Recent coverage by Axios emphasized that agentic AI is poised to land in 2025. Meta's Mark Zuckerberg and others in the software development world argue that AI agents are not just poised to replace developers at some far-flung future date, but will begin doing so over the course of 2025. "In 2025, we at Meta, as well as the other companies, are basically going to have an AI that can effectively be a sort of midlevel engineer," Zuckerberg told Joe Rogan in a recent interview.
The advances are exciting from an engineering perspective, but they will also bring significant technology and business risks to the application stack. Either way, there's no putting the agentic AI genie back in the bottle, experts agree.
Agentic AI builds on low-code and no-code
In many ways, agentic AI is extending what the low-code and no-code movement started years ago in its push to arm citizen developers and streamline development workflows. Many of today’s coding assistants and automated AI agents evolved from low-code/no-code platforms.
Agentic AI is poised to "blow up the business process layer," replacing hand-written business logic in business process workflows, which in many cases represents a huge chunk of the work engineering teams do today, whether for integrations or entirely new applications, Ed Anuff, chief product officer at DataStax, wrote in a recent think piece about AI agents at The New Stack.
“When agentic AI is applied to business process workflows, it can replace fragile, static business processes with dynamic, context-aware automation systems."
—Ed Anuff
Know the risks of agentic AI in development
In many ways, agentic AI will add a new layer of abstraction to security problems. Organizations will need to build safeguards and governance around how the agents operate, and around the security of the code and the models that run them, while still maintaining and improving all of the traditional guardrails for the security and quality of code and logic produced by either humans or AI, said Dhaval Shah, senior director of product management at ReversingLabs.
"Securing AI in development is like playing chess where the pieces move by themselves. With AI in development, not everything that can be secured can be seen, and not everything that can be seen can be secured.”
—Dhaval Shah
In particular, agentic AI ratchets up software supply chain security risks, Shah said, explaining that adding AI agents to the development workflow challenges traditional models in two big ways.
"First, AI agents blur traditional trust boundaries by seamlessly mixing proprietary, open-source, and generated code, making traditional software composition analysis ineffective. Second, they introduce new dependencies we can't easily track or verify, from model weights to training data, creating blind spots in our security monitoring.”
—Dhaval Shah
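One practical response to those new, hard-to-verify dependencies is to fingerprint them. Below is a minimal sketch, in Python, of recording SHA-256 digests for model artifacts such as weight files so that unexpected changes can at least be detected; the directory layout and manifest format are illustrative assumptions, not a standard.

```python
# Minimal sketch: record SHA-256 digests of model artifacts so changes to
# these otherwise-untracked dependencies can be flagged later.
# The "models/" path and manifest format are illustrative assumptions.
import hashlib
import json
from pathlib import Path

def digest(path: Path) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(artifact_dir: str) -> dict:
    """Map every file under artifact_dir (e.g., model weights) to its digest."""
    root = Path(artifact_dir)
    return {str(p.relative_to(root)): digest(p)
            for p in sorted(root.rglob("*")) if p.is_file()}

if __name__ == "__main__":
    manifest = build_manifest("models/")  # assumed local layout
    Path("model-manifest.json").write_text(json.dumps(manifest, indent=2))
```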
Shah said there are three major risks that AppSec pros will need to stay ahead of as agentic AI takes hold within their development organizations:
Dependency chain opacity
As AI agents and coding assistants are tasked with autonomously selecting and integrating dependencies, supply chain blind spots are going to grow bigger and more plentiful, Shah said. “Agentic AI creates blind spots in our security visibility. Unlike human developers who might carefully vet a library, AI can pull from numerous sources simultaneously, making traditional dependency tracking insufficient,” he said.
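One way to start closing that visibility gap is to inspect what AI-generated code actually imports before it is merged. The following sketch uses Python's ast module to flag imports that fall outside a vetted allowlist; the allowlist contents are an illustrative assumption, and a real pipeline would feed it from the organization's approved-component inventory.

```python
# Minimal sketch: flag imports in AI-generated Python code that fall
# outside a vetted allowlist. The allowlist below is an assumption.
import ast
import sys

VETTED = {"json", "logging", "pathlib", "requests"}  # assumed vetted set

def unvetted_imports(source: str) -> set[str]:
    """Return top-level module names imported by `source` but not vetted."""
    tree = ast.parse(source)
    found = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            found.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found - VETTED

if __name__ == "__main__":
    code = open(sys.argv[1]).read()  # e.g., a freshly generated file
    for name in sorted(unvetted_imports(code)):
        print(f"unvetted dependency: {name}")
```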
Expanded attack surface
As agentic AI-driven coding assistants grow more sophisticated in executing multi-step, chained software engineering tasks, they will touch and interact with a broader range of systems, applications, and application programming interfaces (APIs). That will expand the attack surface of not only the applications but also the development stack itself.
“This interconnected nature creates a broader attack surface where a single weak link can compromise the entire workflow. For example, an AI agent coordinating a supply chain could be exploited to inject malicious instructions across multiple systems.”
—Dhaval Shah
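A common mitigation pattern here is to put a narrow, audited gateway between the agent and the systems it can reach. The sketch below shows one possible shape for that: an allowlist-plus-audit-log dispatcher for agent tool calls. The tool names and dispatch interface are illustrative assumptions rather than any specific agent framework's API.

```python
# Minimal sketch: an allowlist-plus-audit-log wrapper around agent tool
# calls, so one compromised step cannot silently reach arbitrary systems.
# Tool names and the dispatch shape are illustrative assumptions.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-gateway")

ALLOWED_TOOLS = {"read_ticket", "open_pull_request"}  # assumed agent scope

class ToolNotPermitted(Exception):
    pass

def dispatch(tool_name: str, registry: dict, **kwargs):
    """Run a tool only if it is on the agent's allowlist; log every attempt."""
    log.info("agent requested tool=%s args=%s", tool_name, kwargs)
    if tool_name not in ALLOWED_TOOLS:
        raise ToolNotPermitted(f"{tool_name} is outside this agent's scope")
    return registry[tool_name](**kwargs)

# Usage: the registry maps names to real integrations; these are stubs.
registry = {"read_ticket": lambda ticket_id: f"ticket {ticket_id} body",
            "open_pull_request": lambda title: f"PR created: {title}"}
print(dispatch("read_ticket", registry, ticket_id=42))
```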
Emergent behaviors
As AI collaborates with human developers, emergent vulnerabilities may arise from unforeseen interactions between AI-generated snippets and hand-crafted code, Shah said. “This blend can create novel, complex failure modes that defy traditional testing and threat models.”
For example, research is already emerging that shows attackers turning their sights to open AI models to develop novel malware attack techniques. ReversingLabs research recently outlined one such scheme that targeted Hugging Face with models containing malicious code designed to evade that platform's security scanning mechanism.
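One low-cost defensive check against that class of attack is to inspect pickle-based model files for dangerous opcodes before ever loading them. The sketch below uses Python's pickletools to do that; it is a rough illustration, not the scanning mechanism discussed in the research, and zipped formats such as PyTorch checkpoints would need their inner pickle extracted first.

```python
# Minimal sketch: scan a raw pickle stream for opcodes that can import and
# call arbitrary objects. A rough defensive check, assuming the input is a
# plain pickle file (zipped checkpoint formats need extraction first).
import pickletools
import sys

SUSPICIOUS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ"}

def risky_opcodes(path: str) -> list[str]:
    """Return suspicious opcode names found in the pickle stream at `path`."""
    hits = []
    with open(path, "rb") as f:
        for opcode, arg, _pos in pickletools.genops(f):
            if opcode.name in SUSPICIOUS:
                hits.append(f"{opcode.name} {arg!r}")
    return hits

if __name__ == "__main__":
    for hit in risky_opcodes(sys.argv[1]):  # e.g., a downloaded .pkl file
        print("suspicious:", hit)
```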
How security teams can come together
Security professionals will need to collaborate to stay abreast of the risks presented by agentic AI, and to create the right blend of visibility and controls over an increasingly complicated SDLC. OWASP recently introduced new threats-and-mitigations guidance focused on agentic AI, complete with concrete threat-modeling information and advice on early mitigation strategies.
Aquia's Hughes said a recent thought piece entitled "Governing AI Agents," by Noam Kolt of the Governance of AI Lab at Hebrew University, should be required reading for AppSec teams.
“As we prepare to see pervasive agent use and implementation, we need to address many issues related to agentic governance."
—Chris Hughes
ReversingLabs' Shah said security leaders should also get ready with a two-pronged approach that balances strategic oversight with immediate controls, because agentic AI is already here. That means deploying AI-aware monitoring that tracks both code generation and dependency inclusion, creating automated security gates that match AI development speed, and establishing clear boundaries for AI tool usage in critical code.
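As one illustration of such a gate, the sketch below fails a CI job when a dependency manifest gains packages that no one has approved. The file names and the idea of a curated approved-packages list are assumptions about one possible setup, not a prescribed workflow.

```python
# Minimal sketch: a pre-merge gate that fails when a dependency manifest
# gains packages nobody has approved. File names are assumptions.
import sys
from pathlib import Path

def read_packages(path: str) -> set[str]:
    """Parse package names from a requirements.txt-style file."""
    pkgs = set()
    for line in Path(path).read_text().splitlines():
        line = line.split("#")[0].strip()
        if line:
            pkgs.add(line.split("==")[0].split(">=")[0].lower())
    return pkgs

if __name__ == "__main__":
    current = read_packages("requirements.txt")
    approved = read_packages("approved-packages.txt")  # curated by AppSec
    new = current - approved
    if new:
        print("blocked: unapproved dependencies:", ", ".join(sorted(new)))
        sys.exit(1)  # fail the CI job
    print("dependency gate passed")
```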
On the broader strategic front, Shah said organizations will need to implement trust-but-verify automated security baseline checks and maintain human-review checkpoints for security-critical changes to code and logic. He also recommends that, wherever possible, teams run AI development in contained environments with defined boundaries.
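Here's a minimal sketch of that sandbox idea: running AI-generated code in a throwaway container with no network access and a read-only filesystem. The base image, resource limits, and file paths are illustrative assumptions; the docker flags shown are standard CLI options.

```python
# Minimal sketch: execute AI-generated code inside a throwaway container
# with no network and a read-only filesystem. Image name, limits, and
# paths are illustrative assumptions.
import subprocess

def run_sandboxed(script_path: str) -> subprocess.CompletedProcess:
    """Run a script in an isolated container with tight resource limits."""
    cmd = [
        "docker", "run", "--rm",
        "--network", "none",           # no outbound access
        "--read-only",                 # immutable filesystem
        "--memory", "256m", "--cpus", "0.5",
        "-v", f"{script_path}:/work/script.py:ro",
        "python:3.12-slim",            # assumed base image
        "python", "/work/script.py",
    ]
    return subprocess.run(cmd, capture_output=True, text=True, timeout=60)

if __name__ == "__main__":
    result = run_sandboxed("/tmp/agent_output.py")  # assumed artifact path
    print(result.stdout or result.stderr)
```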
“Think of it like giving AI a sandbox to play in, but with clear rules and constant supervision. The key isn't containing AI — it's channeling its power within secure guardrails."
—Dhaval Shah
