Shadow Agents: The New Blind Spot for CISOs
A CISO I spoke with a few weeks ago told me something I have not been able to stop thinking about. He works for a mid-sized financial institution in the DACH region. ISO 27001 certified, NIS2 in progress, a security operations centre with 24/7 coverage. Not a newcomer.
"Last week we noticed that three teams are running agents internally. Through external SaaS providers. Nobody told us."
This is not an isolated case. According to the Pentera 2026 Report, 67 percent of security leaders do not know which AI models and agents are running in their organisation. At the same time, Gartner projects that by the end of 2026, 40 percent of all enterprise applications will have AI agents built in. Deloitte finds that 79 percent of organisations are already deploying or testing Agentic AI. Only 21 percent have a mature governance model for it. And 88 percent have already experienced AI agent security incidents, according to Gravitee.
Both curves are moving in the wrong direction.
From Shadow IT to Shadow AI to Shadow Agents
This is not a new phenomenon. It is a progression I have observed over years.
Shadow IT was the problem of the 2010s. Business units bought SaaS services without asking IT. Dropbox, Salesforce, Slack: all arrived in organisations without IT approval first. It took around ten years for identity governance and SaaS Security Posture Management (SSPM) to become standard practice.
Shadow AI followed. Teams started calling GPT APIs directly, sending data to external models, building workflows around public LLMs. The data exfiltration question quickly appeared on the agenda. Many organisations now have policies for this. Not all of them yet.
Shadow Agents are the next step. And here lies the qualitative difference: agents do not just retrieve data. They act.
An agent writes to S3. It opens tickets. It calls APIs. It triggers workflows. If it has the relevant credentials, it can create, delete, or reconfigure resources. With identities that often fall outside classical IAM management, because they sit as a service account, API key, or OAuth token inside a SaaS product that the business unit set up itself.
This is not an AI problem. It is the old identity problem with an additional dimension: principals now act autonomously. And the time window we had with shadow IT to catch up on governance does not exist for shadow agents.
Why the Standard Answer Is Not Enough
When I raise this topic in client conversations, I often hear: "We have an AI policy." Good. But a policy that treats agents as a special case of "AI usage" misses the core issue.
The problem is on the identity side, not the AI side.
An agent is a principal. It has credentials. It has permissions. It executes actions that appear in audit logs. If this principal is not in your identity inventory, if nobody owns it, if its permissions have never been reviewed, then you have the classic problem of an unmanaged service account. The difference from before: this service account makes decisions. And it does so faster than any human.
I experienced this firsthand. During my SaaS project (52 days, approximately 292,000 net lines of code, built with AWS Kiro), I reached a point where several agents were running in parallel, each with their own IAM roles and permissions. What became clear to me: without explicit identity design, agents quickly become a black box to each other. Who wrote what where? Which agent accessed which resource? Without clean tagging and explicit principal identities, there is no reliable answer to that.
In a production system, in a regulated environment, in a C5 audit situation, that is a showstopper.
Three Steps CISOs Can Take Now
The governance problem with agents is solvable. But it requires a different approach than with software tools.
1. Agent Inventory by Identity, Not by Tool
The first step is an inventory of all running agents. Not catalogued by vendor or platform, but by principal: which agent has which credentials? Which permissions are associated with them? In what context does it run?
This sounds like a straightforward exercise. In practice it is often surprisingly difficult, because agents have arrived in the environment via different routes: some through central IT, some through business units, some as embedded features in SaaS products that someone activated six months ago.
Without an inventory there is no scope. Without scope there is no governance programme that works.
2. Every Agent Needs a Human Owner
This is the question I ask in every workshop, and it rarely gets a satisfying answer: who is responsible when an agent does something wrong?
Not at the abstract level of "the product belongs to team X", but concretely: who reviews the permissions of this agent quarterly? Who gets the PagerDuty alert when it shows unexpected behaviour? Who gets called when it is part of a security incident?
This question must be answered before rollout. Not after. Anyone who does not define ownership before an agent goes live is pushing the problem to the worst possible moment: the first incident.
For AWS-native agents, this can be structured with tagging-based ownership metadata and IAM permission boundaries. For external agents via SaaS, this requires a corresponding field in the CMDB or GRC tool that asks the same question.
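As a sketch of the AWS-native side: ownership metadata can live as tags on the agent's IAM role, with a permissions boundary capping what the role can ever be granted. The tag keys, role name, and boundary policy ARN below are my own illustrative convention, not an AWS standard:

```python
def ownership_tags(owner_email: str, review_cycle: str = "quarterly") -> list[dict]:
    """Tags that answer: who reviews this agent, and how often?"""
    return [
        {"Key": "agent:owner", "Value": owner_email},
        {"Key": "agent:review-cycle", "Value": review_cycle},
    ]

tags = ownership_tags("alice@example.com")
print(tags)

# With boto3, these would be applied roughly like:
#   iam = boto3.client("iam")
#   iam.tag_role(RoleName="ticket-agent", Tags=tags)
#   iam.put_role_permissions_boundary(
#       RoleName="ticket-agent",
#       PermissionsBoundary="arn:aws:iam::123456789012:policy/agent-boundary")
```

The boundary matters because it holds even if someone later attaches a broader policy to the agent's role.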
3. CloudTrail as the Audit Foundation for Agent Actions
Every action by an agent is an API call. And in AWS environments, every API call is recorded in CloudTrail.
That is the good news: the technical foundation for visibility exists. The bad news: it must be actively used. Recording CloudTrail is not the same as evaluating CloudTrail.
In combination with AWS Security Lake, agent actions can be analysed over time. Who accessed which resource when? Which access patterns are normal? What is an anomaly? This is the foundation for a genuine answer to the question: "Do we know what our agents are doing?"
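One simple and useful first analysis is flagging actions a principal has never performed before. A hedged sketch, with event records shaped like CloudTrail's userIdentity/eventName fields but invented for illustration:

```python
def first_seen_actions(events: list[dict], baseline: dict) -> list[tuple]:
    """Return (principal, action) pairs absent from the baseline period."""
    novel = []
    for e in events:
        principal = e["userIdentity"]["arn"]
        action = e["eventName"]
        if action not in baseline.get(principal, set()):
            novel.append((principal, action))
    return novel

# Baseline: actions observed per principal over, say, the last 90 days
baseline = {
    "arn:aws:iam::123456789012:role/ticket-agent": {"CreateCase", "GetObject"},
}

today = [
    {"userIdentity": {"arn": "arn:aws:iam::123456789012:role/ticket-agent"},
     "eventName": "DeleteBucket"},  # an agent suddenly deleting resources
]

print(first_seen_actions(today, baseline))
```

In practice the baseline would come from queries over CloudTrail history in Security Lake rather than an in-memory dict, but the shape of the question is the same.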
And it is the foundation for answering the AI system monitoring question in a C5:2026 audit. Anyone not evaluating CloudTrail for agent actions today will need to do so by their next attestation at the latest.
The Identity Problem Comes Before the AI Problem
I have been working for months in TM Forum Catalysts and client projects on the question of how Agentic AI can act autonomously in enterprise environments without governance and compliance falling by the wayside. My observation: most of the challenges that arise are not AI-specific problems. They are identity problems that become more visible and more urgent through the autonomy of agents.
RBAC, least privilege, ownership, audit trail: these are concepts every CISO knows. They now need to be applied to principals that do not come from an HR database, that do not have an "onboarding" and "offboarding" process like humans, and that in extreme cases execute several thousand actions per day.
That is not a reason for panic. It is a reason for structure.
Agent sprawl is not a future problem. It is already here. The difference between well-run and overwhelmed organisations will not be whether they use agents. It will be whether they know which ones.
Are you currently working on agent governance frameworks or have you had similar experiences with shadow agents? I am happy to exchange notes in the comments or directly on LinkedIn.