AI Is the Best Thing That Ever Happened to Threat Actors. It Can Be the Same for Defenders.
There is a version of the AI moment that looks like pure progress: faster products, leaner teams, code that writes itself. And then there is the version that keeps security leaders up at night.
Both versions are real. And they are happening at the same time.
Ryan Burke, VP Worldwide Sales at Crogl, recently joined a conversation about what AI is actually doing to enterprise cyber risk. Not the theoretical version. The operational one, measured in alerts per day, patch windows that break production systems, and internal tools that quietly become business-critical before anyone realizes they were never properly secured.
Two Ways AI Is Changing the Threat Landscape
The conversation started with an honest framing: AI is reshaping cybersecurity from two directions at once, and most organizations are only paying attention to one of them.
The external threat is accelerating. Threat actors are using AI to identify vulnerabilities faster, chain attacks together with less manual effort, and automate activity at scale. Social engineering, which has always depended on convincing communication, is getting more convincing. Phishing messages are harder to spot. Deepfakes are more accessible. Reconnaissance that once took days can happen in hours.
The concern around increasingly capable AI models, including Anthropic's frontier work, is as much economic as it is technical. Highly skilled threat researchers could already do many of these things. What changes with AI is the access curve. Capabilities that required serious expertise now require far less. The barrier to entry drops, and the population of capable attackers grows.
The internal threat is quieter but spreading fast. The second risk is the one enterprises are creating for themselves. Companies are deploying AI systems before they are mature, generating production code through AI tools without adequate review, and building agentic workflows that carry implicit permissions no one explicitly designed. The misconfiguration surface is expanding faster than most security teams can audit it.
This is the part of the AI conversation that tends to get skipped in boardrooms: not just what attackers can do with AI, but what your own developers, operations teams, and business analysts are doing with it right now.
The Operational Problem Is Already Bad
Before AI entered the picture, most enterprise security organizations were already underwater. Patch management is a useful example. The conventional answer to vulnerabilities is: patch them. The operational reality is that patching takes time, requires testing, and carries its own risk. A critical update to a system running core business logic can break that system. The security team knows it. The business does not always want to hear it. Patches get delayed. The window stays open.
Now add AI-generated code and internal automation tools into that environment. Security teams are contending with a category of risk that is increasingly difficult to inventory. Someone in finance builds a workflow using a no-code AI tool. It works. Other teams start depending on it. Six months later it is business-critical, it has API access it was never formally granted, and nobody owns the security posture of it. Call it "vibe-coded" infrastructure, call it shadow IT with a new face. Either way, it represents a growing layer of operational exposure that does not appear on most asset inventories.
Meanwhile, the alert volume problem is not getting better. Security teams are receiving thousands of alerts per day, operating across multiple SIEMs, multiple SOARs, and tools that do not share a common schema. Analysts are triaging noise, not investigating signal. The best talent gets burned out doing repetitive, low-judgment work. The real threats move through the gaps.
What a Practical Defensive Response Looks Like
Cybersecurity has always been about managing risk, not eliminating it. That frame matters more than ever in an AI environment, because the instinct to demand certainty before acting is a losing strategy. Risk will not wait.
The practical question is: where can organizations concentrate their limited defensive capacity to have the most impact?
This is where AI can genuinely help defenders. The noise problem, the alert volume problem, the analyst attention problem — these are solvable with the right application of AI to the triage layer. Tools that can reason through an investigation, surface the relevant signal, document what they found, and hand a coherent picture to a human analyst change the economics of what a security team can cover.
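The triage pattern described above can be sketched in a few lines of code. Everything here is illustrative: the alert fields, the scoring weights, and the threshold are assumptions invented for the example, not any real SIEM schema or any vendor's actual logic. The point is the shape of the economics, since filtering and ranking happen before a human ever looks.

```python
from dataclasses import dataclass

# Illustrative only: field names and weights are assumptions,
# not a real SIEM schema or any specific product's scoring.
@dataclass
class Alert:
    source: str             # which tool emitted the alert (SIEM, EDR, ...)
    severity: int           # vendor-reported severity, 1 (low) to 5 (critical)
    asset_criticality: int  # 1-5, taken from the asset inventory
    corroborated: bool      # did a second, independent tool see related activity?

def triage_score(a: Alert) -> float:
    """Combine signals into one priority score.

    A weighted sum is a stand-in for whatever richer reasoning
    an investigation layer actually performs.
    """
    score = 0.4 * a.severity + 0.4 * a.asset_criticality
    if a.corroborated:
        score += 1.0  # independent corroboration raises priority
    return score

def summarize(alerts: list[Alert], threshold: float = 2.5) -> list[str]:
    """Return human-readable lines only for alerts worth an analyst's time."""
    kept = sorted(
        (a for a in alerts if triage_score(a) >= threshold),
        key=triage_score,
        reverse=True,
    )
    return [
        f"{a.source}: severity {a.severity}, "
        f"asset criticality {a.asset_criticality}, "
        f"corroborated={a.corroborated}"
        for a in kept
    ]
```

In this sketch, thousands of raw alerts collapse into a short ranked list with the supporting context attached, which is the "show its work" property: the analyst still makes the call, but on a handful of documented candidates rather than the full feed.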
That is the design premise behind Crogl. Security teams dealing with tens of thousands of alerts per month, complex multi-tool environments, and fragmented data do not need another layer of dashboards. They need something that can investigate on their behalf and show its work. The analyst makes the call. The machine handles the volume.
The goal is not to replace the human judgment at the center of security operations. It is to make sure that judgment is applied to the situations that actually require it, instead of being exhausted on the routine work that precedes them.
The Enterprise Question No One Can Dodge
Public markets are paying close attention to every AI product release right now, in part because investors are trying to figure out what AI does to the long-term defensibility of software businesses. That uncertainty is real and worth watching.
But the more immediate enterprise question is not whether AI is coming. It is already here. The question is whether AI adoption inside the organization is creating invisible security debt, and whether the security function has the operational capacity to detect and respond to what is being built.
The teams that get ahead of this are not necessarily the ones with the biggest budgets. They are the ones that have found a way to keep human judgment in the loop without drowning analysts in the volume of work that precedes it.
AI is both a more capable attack tool and a more capable defensive one. The organizations that recognize both of those things at once, and build for both, will be better positioned than the ones waiting for the picture to get cleaner before they act.
The picture is not getting cleaner. That is the point.