April 20, 2026

Do Your Agents Know Your Secrets?


Lipyeow Lim

Distinguished Engineer and Head of AI, Crogl


Dominic Salas

Sr. Security Engineer, Crogl

Your AI SOC agent just disabled a compromised account, queried your SIEM, and rotated an API key.

Impressive.

But here's the uncomfortable question: where did it get the credentials to do that?


The Hidden Cost of Autonomous Agents

We've moved quickly from chatbots to agents: systems that don't just answer questions, but take action. In security operations, that means querying internal systems (SIEM, EDR, IAM), triggering workflows, isolating endpoints, and rotating credentials.

But autonomy comes with a requirement: agents need access. And access requires secrets.

API tokens. OAuth credentials. Service accounts. Keys to critical infrastructure.

At that point, every useful agent becomes something more than an assistant. It becomes a secrets-handling system.


Where Things Get Risky

In many architectures, these agents rely on third-party LLM providers. That creates a subtle, but critical, problem: secrets often cross the enterprise trust boundary.

They can end up embedded in prompts, passed into tool calls, stored in logs or traces, or retained in external systems. Even when handled carefully, the architecture itself introduces risk, and that risk maps directly to well-known failure modes.
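To make that concrete, here is a deliberately simplified Python sketch of the anti-pattern (the function names and the token are invented for illustration): once a credential is interpolated into a prompt, every downstream consumer of that prompt, including the provider call and your own tracing, retains it verbatim.

```python
# Illustrative anti-pattern: a secret interpolated into the prompt
# spreads to everything that touches that prompt.
API_TOKEN = "sk-live-EXAMPLE"  # placeholder, not a real credential

def build_prompt(task: str) -> str:
    # The credential is now part of the model context.
    return f"Use token {API_TOKEN} to {task}."

def log_request(prompt: str, log: list[str]) -> None:
    # Typical request tracing now stores the secret verbatim.
    log.append(prompt)

trace: list[str] = []
prompt = build_prompt("query the SIEM")
log_request(prompt, trace)

# The secret has crossed into the prompt, the provider call, and the logs.
leaked = API_TOKEN in prompt and API_TOKEN in trace[0]
```

Nothing here is malicious; it is ordinary plumbing. That is exactly why the exposure is easy to miss.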


This Isn't Theoretical: It Maps to OWASP Risks

Modern agentic systems map closely to the risks cataloged in the OWASP Top 10 for LLM Applications.

Sensitive Data Exposure. Secrets can leak through prompts, tool inputs, or logs, especially when external services are involved.

Prompt Injection and Data Exfiltration. Malicious inputs can manipulate agents into revealing sensitive data, triggering unintended actions, or querying internal systems.

Excessive Agency. Agents are often over-privileged, with broad API access, persistent credentials, and limited scoping.

Insecure Tooling. APIs exposed as tools become attack surfaces through weak validation, unintended execution paths, and hallucinated actions.


A Step Forward: MCP and Structured Tooling

Protocols like MCP (Model Context Protocol) are a meaningful improvement. They keep secrets out of prompts, centralize credential handling, and allow agents to call tools without directly accessing secrets. This is a big step forward.
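A minimal sketch of that separation, with illustrative names of our own choosing rather than actual MCP APIs: the model emits a structured tool call containing only a tool name and arguments, while the tool server resolves credentials on its own side.

```python
# Illustrative MCP-style split: credentials live only on the tool server.
SECRET_STORE = {"siem.search": "token-EXAMPLE"}  # server-side only

def handle_tool_call(name: str, args: dict) -> dict:
    token = SECRET_STORE[name]  # credential resolved server-side
    assert token  # stand-in for the real API call that would use `token`
    return {"tool": name, "args": args, "status": "ok"}

# The model's entire view: a structured call with no credential in it.
model_call = {"name": "siem.search", "args": {"query": "failed logins"}}
result = handle_tool_call(model_call["name"], model_call["args"])
```

The model never holds the token; it only names the capability it wants.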

But it also changes the problem.


The Risk Moves: From Exposure to Invocation

With MCP-style architectures, the model doesn't see the secret. But it can still trigger actions that use it. That's not a flaw; it's inherent to any agent system. The question is no longer "who can see the secret?" but "who can trigger its use?"

At that point, security depends on how tools are scoped, how permissions are enforced, and how actions are governed. In other words, security becomes a control and policy problem.
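In code, a control and policy problem looks less like encryption and more like an authorization gate. A hypothetical sketch (the agent and tool names are ours): every invocation is checked against an explicit allowlist before anything runs.

```python
# Illustrative policy gate: which agent may trigger which tool.
POLICY = {"triage-agent": {"siem.search", "edr.isolate"}}

def authorize(agent: str, tool: str) -> bool:
    # Deny by default: unknown agents get an empty scope set.
    return tool in POLICY.get(agent, set())
```

A gate like this doesn't touch the secret at all; it governs who can cause the secret to be used, which is the question that now matters.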


And Then There's the Human Factor

Of course, a user can always paste a secret into a prompt. No architecture can fully prevent that. But that's a different class of problem.

The distinction matters: user-driven disclosure is largely unavoidable. System-required exposure is not. The real issue is whether your system requires secrets to flow through the model in order to function. Many do.


A Better Model: Separation by Design

The safer approach is not to rely on the LLM to "handle secrets correctly." It's to ensure the LLM never has access to them in the first place.

This requires a strict separation: the LLM decides what should happen, and a controlled system decides how it happens. No credentials. No tokens. No secrets in the model context.
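The what/how split can be sketched in a few lines (the names, action, and credential below are invented for illustration): the model produces only an intent, and a separate execution layer validates it and applies credentials.

```python
# Illustrative separation: the model decides WHAT, the platform decides HOW.
ALLOWED_ACTIONS = {"disable_account", "rotate_key"}
CREDENTIALS = {"iam": "cred-EXAMPLE"}  # never enters the model context

def model_decides(alert: str) -> dict:
    # Stand-in for the LLM: it names an action and target, nothing more.
    return {"action": "disable_account", "target": "jdoe"}

def platform_executes(intent: dict) -> str:
    if intent["action"] not in ALLOWED_ACTIONS:
        raise PermissionError(intent["action"])
    _credential = CREDENTIALS["iam"]  # applied only here
    return f"{intent['action']} on {intent['target']}: done"

intent = model_decides("suspicious login for jdoe")
outcome = platform_executes(intent)
```

The intent object is the only thing that crosses the boundary, and it contains no credential to leak.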


Crogl's Approach: Agents Without Secrets

Crogl builds on modern agent architectures like MCP, but enforces stronger guarantees around isolation and control.

Secrets are fully isolated. They're encrypted at rest and in transit, never exposed to the agent, and never included in prompts, tool payloads, or memory. Agents in Crogl are structurally incapable of accessing secrets.

Execution is controlled, not delegated. The LLM decides intent; the Crogl platform enforces execution, and credentials are applied only within a controlled execution layer.

Access is governed by role-based controls. Agents operate with least-privilege tokens. Role-based access controls scope what actions are allowed. User data is isolated. Agents only act within their caller's context.
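As a simplified sketch of these controls (the roles, scopes, and callers are hypothetical, not Crogl's actual model): a least-privilege token binds an agent to both a scope set and a caller context, and anything outside either is denied.

```python
# Illustrative least-privilege token: scopes plus caller-context isolation.
def mint_token(caller: str, role: str) -> dict:
    scopes = {"analyst": {"siem.search"}, "responder": {"edr.isolate"}}
    return {"caller": caller, "scopes": scopes[role]}

def act(token: dict, tool: str, target_caller: str) -> bool:
    # Deny anything outside the token's scopes or its caller's context.
    return tool in token["scopes"] and target_caller == token["caller"]

tok = mint_token("alice", "analyst")
```

An analyst's agent can search the SIEM within Alice's context, but it can neither isolate endpoints nor act on Bob's behalf.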

This doesn't eliminate all risks, but it ensures actions are bounded, access is intentional, and behavior is governable.


What This Actually Solves

No system can eliminate all risks in autonomous workflows. Crogl doesn't claim to. But architecture still matters.

Crogl removes an entire class of risk: no secrets in prompts, no secrets in LLM context, no secrets leaving the enterprise boundary, no programmatic access to credentials by the agent. What remains, like tool misuse or user error, is addressed where it belongs: through policy, permissions, governance, and a full audit trail in which every agent action is logged and attributable.


Rethinking "Autonomous"

Autonomy shouldn't mean "agents with full access to everything." It should mean "agents operating through controlled, secure capabilities."


Final Thought

If your agents need to know your secrets, that's a design choice. If your users tell them anyway, that's a different problem.

But the safest architecture is still the simplest: the safest secret is the one your agent never sees.

Talk to the team.