May 16, 2026

Building an AI Knowledge Engine for the SOC: Monzy Merza on Security Unfiltered

Monzy Merza, Crogl's Founder and CEO, joined Joe South on the Security Unfiltered podcast for a wide-ranging conversation about what an AI system has to actually do to be useful inside a real SOC. Not the demo version. The one that runs against fragmented data, multiple model providers, regulated environments, and the alerts a SIEM cannot auto-resolve.

The conversation moved through Crogl's neurosymbolic architecture and Bring Your Own Model as a deployment requirement. It also covered why pricing security AI by the investigation is structurally backwards, and where the analyst sits in a workflow where the system is doing the investigative work.

Most AI in Security Is Built for the Questions People Already Know How to Ask

The conversation opened on a pattern that recurs across the AI SOC market: most of what gets called AI for security operations is, in practice, a chat window bolted onto a search bar. It accelerates the analyst who already knows which alert to open, which source to query, and which field to inspect. It does almost nothing for the 63% of alerts that go uninvestigated each day, because those alerts require evidence from multiple sources, in different schemas, against an environment no foundation model has ever seen.

Monzy made the point that the constraint is not model intelligence. It is grounding. A general-purpose LLM does not know which user is normal for which asset, how a given environment names its fields, or what relationships exist between alerts that appeared on different systems an hour apart. Without that grounding, the system cannot run a real investigation. It can only summarize what someone hands it.

That framing comes directly from Crogl's founding question: what if one analyst could be as effective as the entire team? Monzy was clear that this is not a literal promise. It is a forcing function for product design. The honest answer is that you do not get there without solving the grounding problem first.

The Neurosymbolic Architecture, in Practitioner Terms

A significant section of the conversation focused on what Crogl actually is underneath. Monzy described it as a neurosymbolic system, and unpacked what that means in operational terms.

The neural side is the language model layer plus a live knowledge graph of the customer environment. The graph maps users, assets, behavioral baselines, prior events, and the relationships among them. It learns the fields and what they mean. One source calls a field src_IP. Another calls it source_IP. A third dropped it into a nested JSON object three years ago. The graph holds the mapping so the rest of the system can act on it.
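To make the mapping idea concrete, here is a minimal sketch of what a field-name resolution layer can look like. The source names, field spellings, and the FIELD_MAP structure are invented for illustration; Crogl's graph is learned from the environment, not hand-written like this.

```python
# Minimal sketch of field-name resolution across sources with divergent
# schemas. All names here are hypothetical, not Crogl's implementation.

FIELD_MAP = {
    "source_ip": {
        "firewall":   "src_IP",         # one vendor's spelling
        "proxy":      "source_IP",      # another vendor's spelling
        "legacy_edr": "event.net.src",  # a path into a nested JSON object
    },
}

def resolve_field(canonical: str, source: str) -> str:
    """Translate a canonical field name into the name a given source uses."""
    try:
        return FIELD_MAP[canonical][source]
    except KeyError:
        raise ValueError(f"no mapping for {canonical!r} in source {source!r}")

# The same logical question, expressed once, answered per source:
for source in ("firewall", "proxy", "legacy_edr"):
    print(source, "->", resolve_field("source_ip", source))
```

With a layer like this in place, the rest of the system can write queries against canonical names and let the mapping handle each source's dialect.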

The symbolic side is the part that keeps execution predictable. Skills, which Crogl previously called response plans, are rules-based investigation procedures that define what gets queried, in what order, against which sources, and how the result is recorded. Extensions are integrations governed by defined logic. Together they enforce process consistency. They do not deviate. They do not improvise. They are, in the original sense of the word, an engine.
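One way to picture a skill is as an ordered, fully recorded procedure. The sketch below is hypothetical: the Step and Skill classes, the sources, and the queries are invented, and it demonstrates only the property described above, fixed execution order with a complete record of what ran.

```python
# Illustrative sketch of a "skill" as a rules-based procedure.
# Everything named here is invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Step:
    source: str  # which data source this step queries
    query: str   # the query, in that source's native syntax

@dataclass
class Skill:
    name: str
    steps: list[Step] = field(default_factory=list)

    def run(self, execute) -> list[dict]:
        """Run every step, in order, and record each result.
        No branching, no skipping, no improvisation."""
        record = []
        for step in self.steps:
            record.append({
                "source": step.source,
                "query": step.query,
                "result": execute(step.source, step.query),
            })
        return record

# A hypothetical skill; the sources and query strings are made up.
impossible_travel = Skill("impossible-travel", [
    Step("idp", "logins | stats count by user, geo"),
    Step("vpn", "sessions | where user = $user"),
])

# Any callable that runs a query against a source will do for the sketch.
audit_trail = impossible_travel.run(lambda src, q: {"rows": 0})
```

The point of the structure is that the output is the same every time the inputs are the same, and the record of what was queried is part of the result.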

The two layers run together on every investigation. The neural layer adapts to alerts no playbook anticipated. The symbolic layer keeps the work auditable, repeatable, and bounded. Monzy used the Volt Typhoon advisory as a worked example. Crogl can ingest a long government advisory and extract the indicators and investigative elements. It determines which customer-specific data sources are relevant, runs the corresponding queries in their native formats, and produces a consolidated report describing current impact. No schema normalization step beforehand. No data movement.
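A pipeline-shaped sketch makes that flow easier to see. Everything below is stubbed, and the function names, sources, and query syntaxes are assumptions for illustration; only the shape reflects the description above: advisory in, indicators extracted, per-source native queries, one consolidated report.

```python
# Sketch of the advisory-to-report flow. All logic is stubbed; the
# structure of the pipeline, not the stub bodies, is the point.

def extract_indicators(advisory: str) -> list[str]:
    # Neural layer (stub): an LLM would pull IOCs and investigative
    # elements out of long-form advisory prose.
    return [line.split()[-1] for line in advisory.splitlines() if "IOC:" in line]

def sources_for(indicator: str) -> list[str]:
    # Knowledge-graph lookup (stub): which customer sources are relevant.
    return ["firewall", "proxy"] if "." in indicator else ["edr"]

def native_query(source: str, indicator: str) -> str:
    # Symbolic layer (stub): render the query in each source's own syntax,
    # with no schema normalization or data movement beforehand.
    templates = {
        "firewall": "search src_IP={i}",
        "proxy": "source_IP:{i}",
        "edr": "event.net.src = '{i}'",
    }
    return templates[source].format(i=indicator)

def investigate(advisory: str) -> list[dict]:
    report = []
    for ioc in extract_indicators(advisory):
        for source in sources_for(ioc):
            report.append({"indicator": ioc, "source": source,
                           "query": native_query(source, ioc)})
    return report

print(investigate("Advisory text...\nIOC: 203.0.113.7"))
```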

Bring Your Own Model Is a Deployment Requirement, Not a Feature

Joe asked about the model strategy directly. Monzy was clear that Crogl does not train its own frontier large language model and has no intention of trying. The foundation model labs are very good at this, and they improve faster than any security vendor could match by building its own.

What customers need instead is flexibility. Some have a preferred provider. Some have policy or sovereignty constraints that rule out a particular vendor. Some run open-weight models in their own infrastructure because the data cannot leave the environment. Some change their minds when a better model ships next quarter. Bring Your Own Model means Crogl works with OpenAI, Anthropic, open-weight models like Llama-class systems, and models hosted on platforms like Amazon Bedrock. The orchestration and the grounding layer carry the durable value. The model is a component customers can swap.
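In code, BYOM reduces to keeping the orchestration layer ignorant of which model sits behind a narrow interface. The sketch below is an assumption-laden illustration: the ModelProvider protocol, the complete() signature, and both provider classes are invented for this post, not Crogl's API.

```python
# Hedged sketch of a bring-your-own-model abstraction. The durable value
# (grounding, orchestration) sits above this interface, so the model
# behind it can be swapped without touching the rest of the system.

from typing import Protocol

class ModelProvider(Protocol):
    def complete(self, prompt: str) -> str: ...

class HostedProvider:
    """Stand-in for a hosted API such as OpenAI, Anthropic, or Bedrock."""
    def __init__(self, name: str):
        self.name = name
    def complete(self, prompt: str) -> str:
        return f"[{self.name}] would answer: {prompt[:40]}..."

class LocalProvider:
    """Stand-in for an open-weight model running inside the environment."""
    def complete(self, prompt: str) -> str:
        return "[local llama-class model] answer"

def summarize_alert(provider: ModelProvider, alert: str) -> str:
    # Orchestration layer: the same grounded prompt, any provider behind it.
    return provider.complete(f"Summarize this alert with context: {alert}")

print(summarize_alert(HostedProvider("bedrock"), "login from new geo"))
print(summarize_alert(LocalProvider(), "login from new geo"))
```

Swapping providers, including to a model that never leaves the environment, then becomes a configuration choice rather than a re-architecture.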

This is also why air-gapped deployment is not a feature footnote in Crogl's positioning. The same investigation capability runs on-premises, in customer cloud, and in fully air-gapped environments. The 45% of SOCs that already protect an air-gapped network are not a fringe segment. They are mainstream, and most AI SOC tooling is not built to reach them.

Pricing That Does Not Punish Use

The portion of the conversation Monzy and Joe spent the most time on, somewhat unexpectedly, was pricing. The position is direct.

Crogl does not charge by the number of investigations. It does not charge by alert volume. It does not charge by user. It does not charge by data volume analyzed. Crogl charges by team.

Monzy's reasoning was structural. If a security product is priced per investigation or per unit of activity, the user starts second-guessing whether to run it. That is exactly the wrong incentive for a security operations tool. The whole point of the product is that it gets used on the alerts no one had time for, including the ones that turn out to be nothing. A pricing model that penalizes that behavior defeats the product's purpose.

Team-based pricing gives customers budget predictability and the confidence to plan over multiple years. It also aligns Crogl's incentives with the customer's. If a customer is using their own model provider, they pay that provider's token costs directly. Crogl still has every reason to make the system token-efficient, because efficient use is faster use, and faster investigations are better investigations. The product does not need investigation-based pricing to capture that value.

Where the Analyst Sits in This Workflow

Monzy and Joe closed on the question that comes up in nearly every AI SOC conversation: what does this mean for the analyst?

Monzy's framing was unchanged from his other public commentary. AI in the SOC is a capacity argument, not a replacement argument. The Ponemon Institute's 2026 State of SecOps survey of 649 practitioners found that 57% cite freeing up analyst bandwidth as a primary benefit of AI in the SOC. The judgment layer stays with the human. The drudgery moves to the system.

The work that precedes judgment is what an AI knowledge engine for the SOC actually does. The work that requires judgment is what analysts do. Those two things sit at different points in the same workflow.

About the Podcast

Security Unfiltered is hosted by Joe South, a cybersecurity practitioner with over a decade of experience across several domains of security. The show covers modern cybersecurity topics for working practitioners and people building careers in the field. Listen here.

Download Crogl free.