May 7, 2026

Why Security Operations Keeps Failing: Monzy Merza on the Real SOC Problem

Security operations is not failing because analysts are weak or because teams are not trying hard enough. It is failing because the structure of the work no longer matches the reality of the environment. That is the argument our co-founder and CEO Monzy Merza makes in Part 1 of his recent podcast appearance, drawing on years spent on every side of the problem.

Monzy spent years inside government research, then helped shape security operations thinking at Splunk and Databricks. After all of that, he made a deliberate choice that most executives never make. He took an individual contributor role inside a Fortune 100 bank SOC. No team to manage. No strategic remit. A keyboard, an alert queue, and direct exposure to the daily grind. He says there is no better teacher than doing the work alongside people who are struggling through it together.

That experience shapes everything he says in this episode, and it is why his read on the SOC starts from inside the problem rather than above it.

What Is Actually Broken in Security Operations

The industry keeps promising simplification. The operating reality keeps moving in the opposite direction. Tools multiply. Schemas diverge. Data lives in more places than ever. And the human cost of holding it all together has become unsustainable.

Monzy frames the issue around what he calls the physics of the work. Alerts arrive late. Tools are scattered. Workflows are brittle. Schemas are inconsistent. Volume keeps climbing. And on top of all that, the modern analyst is expected to combine an almost impossible range of skills before any real alert investigation can begin.

He puts it bluntly with an analogy worth repeating:

"We do not expect a baker to also be a cardiologist. Yet in security operations we routinely expect one analyst to understand email headers, packet capture filtering, cloud logging quirks, identity behavior, and internal policy guidance on AI usage or remote services."

That breadth is unreasonable. It is also the quiet reason so many SOCs feel like they are running on tribal memory and fumes.

Why Putting All Your Data in One Place Does Not Fix the SOC

For more than a decade, the dominant message in security operations has been to centralize. Pull everything into one repository. Normalize every schema. Give analysts a single pane of glass and the rest will follow.

Monzy used to promote that message. He now says it was wrong.

In real organizations, especially high-consequence environments like banks and utilities, the data is not in one place and never will be. Teams run Splunk and Databricks and Log Analytics at the same time. They use multiple SIEMs and multiple SOAR platforms in parallel. He shared one customer example with three SIEMs and two SOAR platforms inside a single organization. That is not a transition state. That is the steady state.

The promise of centralization assumes a world where data sources hold still long enough to be unified. They do not. Every new business system, every cloud migration, every acquisition, every regulatory change rewrites the map. So instead of simplification, teams end up on what Monzy calls an endless treadmill of data hygiene, transport, reduction, schema changes, and retraining.

The point is not that centralization is hard. The point is that centralization as a foundational assumption is flawed. The environment is inherently distributed. SOC modernization built on the opposite premise will keep failing for the same reason it has been failing.

The Hidden Tax of Tool Complexity

There is another cost most vendor narratives skip past. The work is hard not only because of threats. It is hard because the tools themselves get in the way.

Monzy describes what real investigation looks like inside a fragmented SOC. Analysts have to remember which system contains which data. They have to remember different schemas and query languages. They have to know which regions have ingestion lag, and how much. And they have to remember who on the team knows how to work a particular alert type, because that knowledge often lives in one person rather than in any documented runbook.
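
To make that friction concrete, here is a small, hypothetical sketch in Python of the context an analyst ends up carrying for every source: which system holds the data, which query language it speaks, and how far behind ingestion runs. The source names, lag figures, and example queries are illustrative only, not drawn from the conversation or any real inventory.

```python
# Hypothetical sketch of the per-source context an analyst has to hold in their
# head in a fragmented SOC. Names, lag figures, and queries are illustrative.
SOURCE_REGISTRY = {
    "email_gateway": {
        "system": "Splunk",
        "query_language": "SPL",
        "ingestion_lag_minutes": 5,
        "example_query": 'index=email sender="*suspicious.example" | stats count by recipient',
    },
    "cloud_audit": {
        "system": "Log Analytics",
        "query_language": "KQL",
        "ingestion_lag_minutes": 20,
        "example_query": "AuditLogs | where TimeGenerated > ago(1d) | summarize count() by OperationName",
    },
    "endpoint": {
        "system": "EDR console",
        "query_language": "vendor UI",
        "ingestion_lag_minutes": 2,
        "example_query": None,  # point-and-click only; nothing reusable to share
    },
}

def where_do_i_look(data_type: str) -> str:
    """Answer the question analysts end up asking each other in Slack."""
    src = SOURCE_REGISTRY.get(data_type)
    if src is None:
        return f"Nobody has written down where {data_type} lives."
    return f"{data_type}: {src['system']} ({src['query_language']}), ~{src['ingestion_lag_minutes']} min behind"
```

Multiply that registry by dozens of sources, keep it only in people's heads, and the weight of the problem becomes obvious.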

In many SOCs, Slack has quietly become the memory system. Who knows how to investigate this? Who set up that detection? Who has the credentials for the system that nobody touches anymore? That is the bottleneck. It is not analyst headcount alone. It is organizational memory and tool-mediated friction, both of which compound every time someone leaves.

Alert Volume Is Already Growing Faster Than Headcount

Layer the alert curve on top of all of this. Monzy cited year-over-year alert growth in large organizations of roughly 18 to 28 percent, and that is before broader AI adoption fully lands across the business. Our own State of SecOps research found practitioners facing 4,330 alerts per day on average, with only 37 percent ever investigated.
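
A back-of-the-envelope calculation shows why those two numbers are on a collision course. Taking the midpoint of the growth range Monzy cited and assuming a team whose investigation capacity stays flat:

```python
# Rough arithmetic on the figures above: 4,330 alerts/day, 37% investigated,
# and ~23% year-over-year growth (midpoint of 18-28%), with flat headcount.
alerts_per_day = 4330
investigated_share = 0.37
growth = 1.23

investigated_today = alerts_per_day * investigated_share   # ~1,600 alerts the team can actually work
for year in range(1, 4):
    projected = alerts_per_day * growth ** year
    coverage = investigated_today / projected
    print(f"Year {year}: ~{projected:,.0f} alerts/day, coverage falls to ~{coverage:.0%}")
# With the same capacity, coverage drops from 37% toward roughly 20% within three years.
```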

His warning on this point is one of the strongest in the conversation. AI adoption inside enterprises will create a net-new terrain of security exposure, in the same way cloud adoption did a decade ago. It will not just add a few more alerts. It will introduce entirely new categories of risk: shadow AI usage, transitive trust between agents and services, novel forms of misuse, and new policy questions that security teams have to investigate.

If teams are already overloaded, the next wave of operational demand will overwhelm existing models unless the operating system of the SOC changes.

What AI in Security Operations Should Actually Do

Monzy is direct about what AI in security operations is not. It is not a replacement for analysts. He says claims like that come from misunderstanding security, misunderstanding the work, or misunderstanding people. Often all three. We have written separately about why "just add AI agents" does not work for the same reasons.

His model is more practical and more honest. AI should take repetitive, undesirable work off the table. It should reduce the friction caused by tools and interfaces. It should help analysts act on their own intuition faster, instead of forcing them to memorize syntax, schemas, and system locations. It should support the human, not flatten the role into automation theater.

Three examples from the conversation make the point concrete.

The first is high-volume triage of user-reported items. The investigation steps are well understood. The work is repetitive and undesirable. AI can handle the repetitive parts while humans inspect, approve, disagree with, or modify the output. That is augmentation, not replacement.
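
As a rough sketch of that augmentation pattern, assuming hypothetical helper names rather than any real product API:

```python
# Minimal sketch of the augmentation pattern described above: automation handles
# the repetitive enrichment, and a human stays in the loop to approve, modify,
# or reject the proposed outcome. enrich_report() and the verdict values are
# hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class TriageResult:
    report_id: str
    findings: dict
    proposed_verdict: str            # e.g. "benign", "malicious", "needs escalation"
    human_decision: str | None = None
    notes: str = ""

def enrich_report(report_id: str) -> TriageResult:
    # Stand-in for the repetitive steps: pull headers, check sender reputation,
    # detonate attachments, look up earlier reports from the same sender.
    findings = {"sender_reputation": "unknown", "similar_reports": 3}
    return TriageResult(report_id, findings, proposed_verdict="needs escalation")

def human_review(result: TriageResult, decision: str, notes: str = "") -> TriageResult:
    # The analyst can agree with the proposal, overrule it, or annotate it.
    result.human_decision = decision
    result.notes = notes
    return result

reviewed = human_review(enrich_report("rpt-001"), decision="malicious",
                        notes="Matches last week's credential-phishing campaign.")
```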

The second is CISA advisory response. A new advisory arrives. Teams have to extract the key indicators, determine applicability to their environment, write queries against the right data sources, run them, reconcile findings, and assemble a report for leadership. That work has historically taken days. With the right AI support, one of our customers reduced it to minutes.
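
The shape of that workflow is easy to sketch. Every function body below is a stub, and the names are illustrative rather than a description of any specific integration; the point is that the steps become an explicit pipeline instead of days of manual swivel-chair work.

```python
# Sketch of the advisory-response workflow described above. All logic is a stub.
import re

def extract_indicators(advisory_text: str) -> dict:
    # Pull the obvious indicator types out of the advisory text.
    return {
        "cves": re.findall(r"CVE-\d{4}-\d{4,7}", advisory_text),
        "ips": re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", advisory_text),
        "sha256": re.findall(r"\b[a-fA-F0-9]{64}\b", advisory_text),
    }

def map_to_data_sources(indicators: dict) -> list[str]:
    # Decide which systems could even hold matching data (hypothetical mapping).
    sources = []
    if indicators["ips"]:
        sources += ["firewall_logs", "proxy_logs"]
    if indicators["sha256"]:
        sources += ["edr_telemetry"]
    return sources

def run_query(source: str, indicators: dict) -> list[dict]:
    # Placeholder for running the source-specific query and returning hits.
    return []

def respond_to_advisory(advisory_text: str) -> dict:
    indicators = extract_indicators(advisory_text)
    hits = [h for src in map_to_data_sources(indicators) for h in run_query(src, indicators)]
    return {"applicable": bool(hits), "indicators": indicators, "hit_count": len(hits)}

print(respond_to_advisory("Actors exploiting CVE-2024-12345 from 203.0.113.7 ..."))
```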

The third is the most interesting, because it shows what happens when an analyst has room to think. One customer took the platform outside its expected use case. They connected data from LinkedIn, Zoom, and submitted resumes to investigate a fraudulent job applicant scenario, the kind of bot infiltration story that has become more common in remote hiring. The platform did not fight the analyst's idea. It enabled it. The same multi-source pattern shows up in more conventional work like endpoint investigation, where signals have to be reconciled across EDR, identity, and network data before anyone can answer what actually happened.
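
A minimal sketch of that reconciliation pattern, with invented data and field names, looks like this: signals from different sources only answer "what actually happened" once they are lined up on a shared key such as user or host.

```python
# Sketch of the multi-source pattern mentioned above. Data and field names are invented.
from collections import defaultdict

edr_events = [{"host": "wks-042", "user": "jdoe", "detail": "suspicious PowerShell"}]
identity_events = [{"user": "jdoe", "detail": "MFA prompt denied, then approved"}]
network_events = [{"host": "wks-042", "detail": "outbound connection to rare domain"}]

def correlate(*event_streams, keys=("user", "host")):
    """Group events from any number of sources by whichever shared keys they carry."""
    timeline = defaultdict(list)
    for stream in event_streams:
        for event in stream:
            for key in keys:
                if key in event:
                    timeline[(key, event[key])].append(event["detail"])
    return dict(timeline)

print(correlate(edr_events, identity_events, network_events))
```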

The pattern across all three is the same. The best AI systems expand what practitioners can do. The worst ones constrain practitioners to brittle predefined flows.

Why Transparency and Auditability Are Non-Negotiable

If AI is going to do real work in a SOC, teams have to be able to inspect what it did. Monzy treats this as a design requirement, not a feature.

The logic is straightforward. If analysts cannot inspect the work, they cannot build confidence in the work. If they cannot audit it, they cannot learn from it, improve their operations, or reuse that intelligence later. In that scenario, the organization is paying for work and generating intelligence, but the intelligence is not being retained in any reusable way.

This connects directly to the institutional knowledge problem. Teams already suffer when senior analysts leave and the know-how leaves with them. Opaque AI systems create the same problem in a different form. The work happens, the queue moves, and the organization gets no smarter.

Auditable, repeatable, inspectable. Those are not nice-to-haves in security operations. They are how trust gets built and how operational learning compounds over time.

How We Are Approaching the Problem Differently

We built Crogl around the failures Monzy describes, not around the marketing version of how SOCs are supposed to work.

The design assumes data will remain distributed and does not require full normalization as a prerequisite for value. It captures team knowledge as people work, so investigations build on each other instead of starting from scratch every time. It makes the work transparent and inspectable, so analysts can trust the output and the organization can learn from it. And it lets one analyst's investigation benefit the next analyst immediately, turning individual effort into team capability.

We filed a patent around the ability to investigate data regardless of where it lives or what schema it uses, because we built the company on the assumption that fragmentation is permanent. Systems need to operate successfully inside that reality, not against it.

What to Carry Out of Part 1

The takeaway from this part of the conversation is direct. Security operations is not broken because teams are failing. It is broken because the model no longer matches the environment. Centralization assumed a world that does not exist. Tool sprawl assumed analysts could keep an impossible amount of context in their heads. Alert growth assumed someone would always be hired to keep up. None of those assumptions hold anymore, and any honest conversation about SOC modernization has to start there.

Part 2 of the conversation goes deeper into what an alternative looks like in practice, including how teams can start to retain the operational knowledge they have been losing for years. Watch the full episode here, and keep an eye out for our Part 2 coverage soon.

See It in Your Environment.

Talk to the team.