With the advent of AI, security teams are reevaluating how they operate, the tools they use, and where the most significant gaps and opportunities lie. To better understand security teams’ current mindset on this emerging technology, we recently conducted an informal pulse survey of cybersecurity leaders and practitioners to explore their pain points and uncover how they plan to leverage AI to solve them.
While the survey was not exhaustive, the results show that many SOCs remain trapped in reactive operations and manual workflows. AI is emerging not just as a technology shift, but as a strategic opportunity to enable more confident, proactive security outcomes.
SOCs Remain Locked in a Reactive State
The survey found that 70% of respondents' teams spend 75% or more of their time on reactive tasks, and nearly 60% of respondents reported that their teams manually review over half of low-severity or informational alerts. The remaining roughly 40% of respondents said:
- These alerts were ignored or suppressed by default (11.4%)
- A minority were reviewed based on context or rules (15.9%)
- There was no visibility into this process (13.6%)
These answers paint a picture of SOCs buried in manual work and so inundated with alerts that they lack the time to properly triage and investigate even low-severity ones.
Yet, despite this heavy manual workload, 84% of respondents remain either moderately concerned (52%) or very concerned (32%) that legitimate threats could be hiding in those same low-severity and informational alerts. This reveals a central disconnect: manual effort isn't translating into greater peace of mind. SOC teams are locked in firefighting mode and have little time left for proactive tasks that could significantly reduce their risk profile.
Traditional Automation's Limited Impact
The demand for relief drove many security teams to embrace automation. However, based on the survey results, traditional automation has not fully delivered on its promise to reduce manual work and make it easier to identify threats.
When asked how often automated workflows result in false positives or unintended actions:
- 64% of respondents said issues occurred frequently or occasionally.
- Only 25% said they encountered issues rarely, and 2% reported never experiencing them.
These figures suggest that traditional automated playbooks and workflows don’t go far enough. They have proven to be too rigid and difficult to manage in today’s dynamic threat landscape. Their limitations in handling nuance and context have made it harder for teams to trust or fully scale automation as a dependable solution for proactive security measures.
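To make that rigidity concrete, here is a minimal sketch of the pattern the survey points to: a fixed-threshold playbook rule fires on a routine event, while a check that weighs surrounding context does not. The alert fields, thresholds, and verdicts below are hypothetical and for illustration only, not a description of any specific product's playbooks.

```python
# Minimal sketch: a static, threshold-style playbook rule versus a
# context-aware check. Alert fields and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Alert:
    failed_logins: int    # count in the last hour
    source_is_vpn: bool   # login came from the corporate VPN range
    user_on_travel: bool  # travel system says the user is traveling

def rigid_playbook(alert: Alert) -> str:
    # Fires on a fixed threshold regardless of circumstances, so routine
    # events (password resets, travel) become false positives.
    return "escalate" if alert.failed_logins >= 5 else "ignore"

def contextual_check(alert: Alert) -> str:
    # Weighs the same signal against surrounding context before acting.
    if alert.failed_logins >= 5 and not alert.source_is_vpn and not alert.user_on_travel:
        return "escalate"
    if alert.failed_logins >= 5:
        return "review"  # suspicious but explainable; queue for a human
    return "ignore"

if __name__ == "__main__":
    traveling_user = Alert(failed_logins=6, source_is_vpn=False, user_on_travel=True)
    print(rigid_playbook(traveling_user))    # escalate (a false positive)
    print(contextual_check(traveling_user))  # review
```

The point of the contrast is that the static rule has no way to absorb new context without someone rewriting it, which is exactly the maintenance burden respondents describe.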
Where the Community Sees AI Delivering Value Next
The shortcomings of traditional automation have led more security teams to explore agentic AI. Unlike rigid workflows, agentic AI offers the potential to adapt to evolving threats and operate with greater contextual awareness.
When asked where AI could most help SOCs in the next 12 months, respondents reported the following:
| Agentic AI Use Case | Percent of Respondents |
| --- | --- |
| Triage of low-severity or repetitive alerts | 64% |
| Threat hunting and identification of novel attacker behaviors | 52% |
| Correlation of related alerts across multiple tools | 52% |
| Detection rule tuning and suppression logic optimization | 48% |
| Investigation of suspicious activity (e.g., compiling context) | 46% |
| Execution of containment actions (e.g., isolating hosts, disabling accounts) | 39% |
| Generating incident summaries to accelerate understanding | 34% |
| Orchestrating multi-step response actions across tools | 34% |
These answers reflect a clear appetite to automate the SOC tasks that consume the most manual effort, such as triage, correlation, and containment, along with early interest in deploying AI agents in response processes. The longer-term direction is also clear: apply human-like decision making across the entire incident response lifecycle, at a scale that covers every single alert, which only AI can deliver.
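As a rough illustration of what such an agentic triage loop could look like, the sketch below enriches each low-severity alert with context, asks a decision step (standing in for an LLM or agent call) for a verdict, and only escalates or contains when warranted. The alert schema, verdicts, and helper functions are hypothetical assumptions for this sketch, not any particular product's pipeline.

```python
# Rough sketch of an agentic triage loop over low-severity alerts.
# decide() stands in for an LLM/agent call; the alert schema, verdicts,
# and helpers are hypothetical, not a real product's pipeline.

from typing import Iterable

def gather_context(alert: dict) -> dict:
    # Placeholder enrichment: a real agent would pull related alerts,
    # asset ownership, and recent activity for the affected host or user.
    criticality = "high" if alert.get("host", "").startswith("srv-") else "low"
    return {"alert": alert, "related_alerts": [], "asset_criticality": criticality}

def decide(context: dict) -> str:
    # Stand-in for the agent's decision step; returns "close",
    # "escalate", or "contain".
    if context["asset_criticality"] == "high":
        return "escalate"
    return "close"

def isolate_host(host: str) -> None:
    print(f"[action] isolating {host}")

def open_incident(alert: dict, context: dict) -> None:
    print(f"[escalate] {alert['id']} with {len(context['related_alerts'])} related alerts")

def triage(alerts: Iterable[dict]) -> None:
    for alert in alerts:
        context = gather_context(alert)
        verdict = decide(context)
        if verdict == "contain":
            isolate_host(alert["host"])       # containment action
        elif verdict == "escalate":
            open_incident(alert, context)     # hand off to an analyst
        else:
            print(f"[close] {alert['id']} auto-resolved with recorded rationale")

if __name__ == "__main__":
    triage([
        {"id": "A-1", "host": "ws-042", "severity": "informational"},
        {"id": "A-2", "host": "srv-db-01", "severity": "low"},
    ])
```

The key difference from a static playbook is that every alert gets a context-informed verdict and a recorded rationale, rather than being suppressed by default or left unreviewed.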
From Busywork to Proactive Security
The survey results show the reality of today’s SOC environments. Manual approaches and traditional automation aren’t enough to make cybersecurity teams feel like they’re making a meaningful difference in reducing their organization’s risk of being breached. They’re stuck in manual review cycles or struggling with rigid workflows that leave them burnt out and feeling only marginally more secure.
While the reality of the situation is serious, the results also show there's a light at the end of the tunnel. Both cybersecurity leaders and practitioners see AI's potential to reduce the toil of SOC work. There's also an early recognition of a broader opportunity: to achieve total alert coverage, deepen investigations, and deliver security outcomes that manual work and static playbooks couldn't scale to achieve.
That said, it’s clear that AI’s real value lies not in doing what humans already do faster, but in doing what humans don’t have time to do. That makes it a transformative force for proactive, risk-reducing security operations.
Ready to assess your SOC’s AI readiness? Take our quiz.
Survey Methodology
This pulse survey was conducted at the RSAC 2025 Conference and included responses from 44 participants. Respondents were CISOs, SOC directors, managers, analysts, and security engineers. All responses were self-reported and anonymous, covering time allocation, alert review practices, AI maturity, confidence levels, and plans for AI use.