May 12, 2026

Why Your AI SOC Needs Some Error in It (And Three Questions Every CISO Should Answer First)


Monzy Merza, Crogl's Founder and CEO, joined Rachael Lyon and Jonathan Knepher on Forcepoint's To the Point Cybersecurity podcast for a conversation that covered a lot of ground: why AI confidence is built through transparency, not perfection; why SOC headcount will grow rather than shrink; and the three questions every CISO has to answer before they can evaluate an AI tool.


A Transparent System Is More Trustworthy Than a Perfect One

When asked how SOC teams can build confidence in AI outputs, Monzy made a point that cuts against most vendor messaging: some error is not a bug, it is a feature.

If you remove all error from a probabilistic system, you end up with a rule-based system. Rule-based systems cannot surface unexpected connections or prompt analysts in directions they had not anticipated. The value of AI in investigation comes precisely from the fact that it does not operate on fixed rules alone.

The answer to the confidence question is not perfection. It is transparency. If the system is inspectable and auditable, analysts can go back, verify the work, and build confidence over time. That is why auditability is not a compliance checkbox for Crogl. It is core to how the system earns trust.

What that means in practice: organizations should ask their AI providers to document the conditions under which errors occur and how to handle them. An AI system that explains its reasoning is one that analysts can trust and correct. One that cannot be inspected cannot be improved.
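One way to picture what "inspectable and auditable" means in practice is a structured record for each AI investigation step, so an analyst can replay the reasoning later. This is a hypothetical sketch, not Crogl's actual schema; the field names (`hypothesis`, `evidence`, `confidence`) are illustrative assumptions.

```python
from dataclasses import dataclass, field, asdict
import json
import time

@dataclass
class InvestigationRecord:
    """One auditable step of an AI-driven investigation (illustrative only)."""
    alert_id: str
    hypothesis: str                                # what the system thinks happened
    evidence: list = field(default_factory=list)   # queries and artifacts consulted
    confidence: float = 0.0                        # uncertainty surfaced, not hidden
    verdict: str = "needs_review"                  # analyst still makes the call
    ts: float = field(default_factory=time.time)

    def to_audit_log(self) -> str:
        # Serialized so the reasoning can be verified and corrected after the fact
        return json.dumps(asdict(self), sort_keys=True)

record = InvestigationRecord(
    alert_id="ALRT-1042",
    hypothesis="Credential stuffing from a single ASN",
    evidence=["auth_logs: 412 failures across 3 accounts", "geoip: ASN 64500"],
    confidence=0.72,
)
entry = record.to_audit_log()
```

The point of a record like this is the error-handling posture the paragraph describes: because the system writes down its hypothesis, its evidence, and its own confidence, a wrong conclusion is something an analyst can find, trace, and correct rather than a silent failure.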


The Alert Explosion Is Coming. The Headcount Argument Is Wrong.

Monzy named a pattern worth paying attention to: the cost to launch a sophisticated attack campaign has dropped by an order of magnitude. Attackers no longer need a team of specialists across multiple domains. An agentic system can cover that ground faster and cheaper.

The result is more volume, more complexity, and a changing footprint of what alerts even look like. The kinds of violations organizations will face in the next 12 to 24 months are ones they have not seen yet, and the policies and detection logic to handle them do not exist.

The conventional response to that pressure is to assume AI will shrink the team. Monzy's argument is the opposite: the number of people required in security operations will increase. The job will change. Analysts will stop working through backlogs of low-signal alerts. But the demand for people who can evaluate AI outputs, manage institutional knowledge, and make good decisions on hard problems is not going away. It is growing.

Crogl handles the investigation. The analyst makes the call.


Three Questions Every CISO Must Answer

Monzy's framing for CISOs evaluating AI SOC tools:

1. Do you actually want to leave your current state?

Not a soft question. The data on alert volume, analyst capacity, and the cost of manual triage is unambiguous. If the answer is not a clear yes, the rest of the evaluation is academic.

2. How do you want to operate in the future state?

Do you want a SaaS service, or do you want to own and control the system? This is not a preference question. It is a requirements question. Crogl is built for organizations that need customer-managed deployment: on-premises, in their own cloud, or fully air-gapped. That is a deliberate architectural choice, not a limitation. It is also why Crogl is not the right answer for every organization. If you need a SaaS delivery model, you should know that before you start evaluating.

3. How are you going to grow your team to get the outcomes you want?

Tooling changes the mechanism, not the need for people who understand the environment, make decisions, and hold accountability. Organizations that plan to automate their way out of staffing problems are going to find out the hard way that institutional knowledge does not transfer to an AI system automatically.

A fourth point: be skeptical of building this yourself. Most organizations cannot build a production-grade AI investigation system that keeps pace with a space moving as fast as this one. The teams that are succeeding are the ones who find the right partners and influence the roadmap directly.


About the Podcast

To the Point Cybersecurity is produced by Forcepoint and hosted by Rachael Lyon and Jonathan Knepher. The show covers global cybersecurity trends, technology, and strategy for practitioners and security leaders. Listen here.
