March 20, 2026

AI Agents Are Not Enough


Lipyeow Lim

Distinguished Engineer and Head of AI, Crogl

[Illustration: AI agents being poured into a system, an image of intelligence allocation]

There's a growing belief in security today that the answer to AI-powered threats is simple: deploy more AI agents. It's an appealing idea: fight AI with AI. But it's also incomplete, because the real problem isn't just capability. It's coordination, cost, and control. And without addressing those, "just add agents" becomes another scaling failure.

The SOC Was Built for a Different Era

Security Operations Centers (SOCs) were designed for a world where attacks unfolded at human speed. That world is gone.

AI has fundamentally changed the economics and tempo of cyber attacks:

  • Exploits can be generated and adapted in minutes
  • Reconnaissance scales cheaply and continuously
  • Payloads mutate to evade detection
  • Entire campaigns can be orchestrated autonomously

What used to take weeks now takes hours, or less. The modern attacker operates at machine speed. The SOC does not.

Why Traditional Automation Breaks

To keep up, many SOCs leaned into automation, especially deterministic playbooks (SOAR). But these systems have a fatal flaw: they only work for problems we already understand.

The moment an attacker deviates from known patterns, deterministic workflows break. And when they break, humans step in:

  • Analysts manually triage alerts
  • Investigations rely on deep, scarce expertise
  • Response times slow down

This creates a bottleneck, and in an AI-driven threat landscape, bottlenecks are where you lose.

Why "Just Add AI Agents" Doesn't Work

So the next move seems obvious: replace brittle automation with AI agents. But this introduces a new set of problems.

AI agents are:

  • Non-deterministic: outcomes can vary run to run
  • Expensive: token usage scales quickly
  • Overkill for simple tasks: not every problem needs reasoning

If you deploy agents everywhere, you trade one bottleneck (humans) for another (cost and unpredictability). This isn't a tooling upgrade. It's a systems problem.

The Real Shift: From Tools to Intelligence Allocation

The key insight is that not all security problems are equal. Some are well-understood and repeatable. Others are ambiguous and novel. These require fundamentally different approaches:

  • Low uncertainty: deterministic automation
  • High uncertainty: agentic reasoning

The mistake is treating everything like it belongs in one category. What's needed instead is dynamic allocation of intelligence, matching the right approach to the right problem.
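To make the idea concrete, here is a minimal sketch of dynamic allocation as a routing decision. Everything in it is hypothetical: the `uncertainty` score, the threshold, the playbook names, and the handler labels are illustrative stand-ins, not a real product API.

```python
# Hypothetical sketch: route each alert to deterministic automation or an
# AI agent based on how well it matches known patterns.

KNOWN_PLAYBOOKS = {
    "phishing_known_ioc": "quarantine_and_notify",
    "brute_force_login": "lock_account_and_alert",
}

def uncertainty(alert: dict) -> float:
    """Toy score: 0.0 = fully matches a known pattern, 1.0 = novel."""
    return 0.0 if alert["type"] in KNOWN_PLAYBOOKS else 1.0

def allocate(alert: dict) -> str:
    if uncertainty(alert) < 0.5:
        # Low uncertainty: cheap, repeatable, deterministic playbook
        return f"playbook:{KNOWN_PLAYBOOKS[alert['type']]}"
    # High uncertainty: escalate to an agent that can reason
    return "agent:investigate"

print(allocate({"type": "brute_force_login"}))   # playbook:lock_account_and_alert
print(allocate({"type": "never_seen_before"}))   # agent:investigate
```

In a real system the uncertainty score would come from detection confidence, historical match rates, or a model, but the shape of the decision is the same: spend reasoning only where the problem is genuinely ambiguous.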

Agents as Discovery, Not Just Execution

Here's where things get more interesting. AI agents shouldn't just handle complex problems; they should learn from them.

When an agent investigates a novel threat, it produces something valuable: patterns. Over time, those patterns can be:

  • Codified into rules
  • Transformed into deterministic workflows
  • Used to eliminate future uncertainty

In other words, agentic reasoning is how you discover what should later be automated. This creates a feedback loop:

  1. Agents explore the unknown
  2. Patterns emerge
  3. Automation absorbs the known
  4. Agents move on to the next frontier
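The four steps above can be sketched as a single promotion loop. This is an illustrative toy, not a real implementation: the promotion threshold, the `contain_host` response, and the alert-type keys are all assumptions chosen to show the mechanic.

```python
# Hypothetical sketch of the discover-then-automate loop: once an agent has
# resolved enough similar novel alerts, the pattern is promoted to a
# deterministic rule, and future alerts of that type skip the agent entirely.

from collections import Counter

PROMOTION_THRESHOLD = 3                # promote after N agent resolutions (arbitrary)
codified_rules: dict = {}              # alert type -> learned deterministic response
agent_resolutions: Counter = Counter()

def handle(alert_type: str) -> str:
    if alert_type in codified_rules:
        # Known pattern: cheap deterministic path, no reasoning spent
        return f"automation:{codified_rules[alert_type]}"
    # Unknown pattern: an agent investigates (stubbed as a fixed outcome here)
    agent_resolutions[alert_type] += 1
    if agent_resolutions[alert_type] >= PROMOTION_THRESHOLD:
        # Pattern has emerged: codify it so automation absorbs it
        codified_rules[alert_type] = "contain_host"
    return "agent:investigated"

# The first few occurrences go to the agent; later ones hit automation.
results = [handle("novel_lateral_movement") for _ in range(5)]
print(results)
# ['agent:investigated', 'agent:investigated', 'agent:investigated',
#  'automation:contain_host', 'automation:contain_host']
```

The interesting property is that the boundary between "known" and "unknown" moves on its own: every promotion shrinks the space where expensive reasoning is needed.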

Why You Need a Meta-Level AI System

But this loop doesn't run itself. Someone, or something, has to decide:

  • When to use an agent vs. automation
  • How to manage cost vs. accuracy
  • Which patterns are worth codifying
  • How the system evolves over time

This is the missing layer. Not more agents. But a meta-level AI system that governs and optimizes them. A system that:

  • Allocates intelligence dynamically
  • Controls resource usage
  • Learns across investigations
  • Continuously reshapes the boundary between automation and reasoning
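One way to picture that governing layer is as a policy that weighs expected value against agent cost before spending any tokens. The sketch below is a deliberately simplified assumption: the `novelty` and `severity` inputs, the thresholds, and the cost units are all invented for illustration.

```python
# Hypothetical sketch: a meta-level policy that weighs agent cost against
# expected value before committing resources. All numbers are illustrative.

def choose_path(novelty: float, severity: float,
                agent_cost: float, budget: float) -> str:
    """Pick the cheapest approach that fits the problem and the budget."""
    if novelty < 0.3:
        return "automation"        # well-understood: no reasoning needed
    expected_value = severity * novelty
    if expected_value > agent_cost and agent_cost <= budget:
        return "agent"             # ambiguous and important: worth the tokens
    return "defer_to_human"        # too costly, or value too low, to automate

print(choose_path(novelty=0.1, severity=0.9, agent_cost=2.0, budget=10.0))  # automation
print(choose_path(novelty=0.9, severity=0.8, agent_cost=0.5, budget=10.0))  # agent
```

A production version of this decision would be learned and continuously re-tuned, which is exactly the point: the allocation policy itself is where the system improves, not any individual agent.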

The Future SOC Is a Self-Optimizing System

The end state isn't a SOC full of agents. It's a SOC that gets better every time it encounters something new. Where:

  • Humans are no longer the primary bottleneck
  • Agents are used precisely where they add value
  • Automation expands continuously
  • And the entire system adapts at machine speed

AI agents are powerful. But without a system to govern them, they don't scale. The real breakthrough isn't agentic AI. It's AI that manages AI.

Talk to the team.