May 1, 2026

Security Teams Are Paralyzed — Investigate Only 37% of 4,330 Daily Alerts

Published by ITSPmagazine

The numbers are stark. Enterprise security teams receive an average of 4,330 alerts per day. They investigate 37% of them. The rest go unexamined, not because analysts aren't paying attention, but because the volume has outpaced the capacity to respond.

Crogl commissioned the Ponemon Institute to survey 649 security practitioners across North America. The goal was to put hard numbers behind a problem most practitioners already know is real. ITSPmagazine covers what the research found: where AI is working in the SOC, where it is not, and what separates high-performing teams from the rest.

The Alert Investigation Gap

The finding is structural. At 4,330 alerts per day, no team can work the full queue. 42% of in-house SecOps teams have 3 to 5 full-time staff. Even with AI assistance on every alert, teams at that scale cannot physically process thousands of events per shift.
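A back-of-the-envelope calculation makes the capacity gap concrete. The alert volume and staffing figures below come from the survey; the shift length and per-alert investigation time are illustrative assumptions, not numbers from the report.

```python
# Rough capacity check using the survey's figures.
# Assumptions (NOT from the report): one 8-hour shift per analyst per day,
# and 10 minutes of analyst time per investigated alert.
DAILY_ALERTS = 4_330         # average daily alert volume (survey figure)
FULL_TIME_STAFF = 5          # top of the 3-5 FTE range (survey figure)
MINUTES_PER_ALERT = 10       # assumed average investigation time
SHIFT_MINUTES = 8 * 60       # assumed 8-hour shift

# Total analyst-minutes available per day across the whole team.
capacity_minutes = FULL_TIME_STAFF * SHIFT_MINUTES

# How many alerts that capacity can actually cover, and what share
# of the daily queue that represents.
alerts_covered = capacity_minutes / MINUTES_PER_ALERT
coverage = alerts_covered / DAILY_ALERTS

print(f"Alerts coverable per day: {alerts_covered:.0f}")   # 240
print(f"Share of daily queue:    {coverage:.0%}")          # 6%
```

Even under these generous assumptions (every minute of every shift spent on investigation), a five-person team covers only a few hundred alerts a day, a single-digit percentage of the queue. The 37% investigation rate in the survey already implies heavy filtering and triage before any alert reaches an analyst.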

The alerts that go uninvestigated are not obviously benign. They are the ones nobody reached. That 63% gap is not a backlog. It is exposure that accumulates shift over shift.

Tools that classify and score alerts still require a human to investigate the high scorers. When volume outpaces capacity, a smarter queue is still a growing queue. The answer is investigation, not prioritization.

Where AI Helps and Where It Falls Short

62% of organizations in the survey have adopted AI in the SOC. Among those, 67% say AI accelerates alert resolution. That is a real signal.

But two barriers keep coming up. Half of security teams cite workflow integration as their top obstacle. 49% point to dispersed, hard-to-normalize data. These are not technology problems. They are architecture problems. AI that requires clean data before it can operate will stall on the same fragmentation that already slows analysts.

One more concern cuts across the whole category. 61% of practitioners worry that security vendors may use their data to train their own AI models. For organizations handling regulated or sensitive telemetry, that is not a hypothetical concern. It is a deployment constraint.

What High-Performing Teams Do Differently

The survey shows a clear gap between high-performing SOC teams and the rest. The difference is not headcount. It is whether AI performs the investigation itself rather than just triaging the queue.

High-performing teams use AI to conduct investigations. The system assembles context, queries across data sources, and produces a documented finding before an analyst is involved. 57% of all practitioners surveyed said freeing analyst bandwidth is the primary benefit of AI in the SOC. High-performing teams are the ones actually achieving it.

45% of SOCs in the survey already protect an air-gapped network. For those organizations, capability is not the question. The question is whether the AI system can run inside a constrained environment without data leaving the boundary. Most enterprise AI security tools cannot.

Read the full research: 2026 State of SecOps Report

Frequently asked questions

Why do security teams investigate only 37% of alerts?

The constraint is capacity, not skill. Enterprise SOCs receive an average of 4,330 alerts per day. 42% of in-house teams run with 3 to 5 full-time staff. The math does not work regardless of analyst experience. The alerts that go uninvestigated are not necessarily benign; they are simply the ones the team did not reach. Alert classification tools can sort and score the queue, but they still require a human to investigate the high scorers. When volume outpaces capacity, the queue grows.

How is AI being used in the SOC today?

62% of organizations surveyed have adopted AI in the SOC. Among those, 67% report that AI accelerates alert resolution. The most common use is assistance: AI generates a recommendation, and an analyst acts on it. A smaller segment of high-performing teams has moved past assistance into autonomous investigation, where AI conducts the full investigation before an analyst is involved.

What is the biggest barrier to AI adoption in security operations?

Two barriers are nearly tied. Half of security teams cite workflow integration as their top obstacle. 49% identify dispersed, hard-to-normalize data as an equal barrier. Both point to the same issue: most AI security tools require clean environments. Enterprise SOCs are not clean environments. Data lives across SIEMs, EDR platforms, cloud logs, and ticketing systems, often in incompatible formats. AI that requires normalization before it can operate inherits the same fragmentation problem analysts already face.

What data governance concerns exist around AI security tools?

61% of practitioners worry that security vendors may use their data to train their own AI models. For organizations handling regulated, classified, or sensitive telemetry, this is a deployment-blocking concern. It is also one of the reasons 45% of SOCs already operate air-gapped networks. Cloud-native AI tools cannot run in those environments.

What do high-performing SOC teams do differently with AI?

High-performing teams have moved from AI-as-assistant to AI-as-investigator. They deploy AI to conduct investigations without waiting for human direction. The system assembles context, queries data sources in native formats, and produces a documented finding before a human is involved. 57% of all practitioners surveyed said freeing analyst bandwidth is the primary benefit of AI in the SOC. High-performing teams are the ones actually achieving it.
