AI Agents Are Already Inside Your Systems and Your Identity Strategy Doesn't Know It Yet

By the end of 2026, 40% of enterprise applications will include task-specific AI agents. Twelve months ago, that figure was below 5%. The leap is not just statistical: it is structural.

Francisco Torres · May 6, 2026 · 7 min

By the end of 2026, 40% of enterprise applications will include task-specific AI agents. Twelve months ago, that figure was below 5%. The leap is not merely statistical: it is structural. Millions of non-human identities are operating right now on corporate networks with access to data, systems, and decisions, and most security teams are still looking at the problem with the wrong instruments.

Identity and access management — what the industry calls IAM — was built for a world where the actors in the system were people. Someone comes in, is assigned a role, has their access reviewed periodically, and eventually gets offboarded. The cycle has human logic because it was designed for humans. AI agents do not come in through human resources, they do not have a manager to approve their permissions, and they do not have a scheduled termination date. But they do have access. And that access, in the majority of organisations, is not governed with the same rigour applied to any new employee.

This is not a minor technical problem. It is a structural crack in how companies understand who — or what — operates inside their systems.

The Inventory Nobody Has

Before talking about controls, there is a more basic question that few organisations can answer with precision: how many AI agents are running in their environments right now, who deployed them, and what they can do.

The uncomfortable answer is that most do not know. According to data published by Gravitee, only one in seven AI agents operating in production environments received a formal review from the security team before being deployed. The rest were launched by business or development teams driven by operational urgency, without going through the same filters applied to any new system. The result is an ecosystem of non-human identities that accumulate permissions without audit, operate under shared credentials, and remain active long after the workflow that originated them has changed or disappeared.

The problem is not new in conceptual terms. Non-human identities — service accounts, API keys, automation scripts — already outnumbered human users in the majority of large enterprises before AI agents entered the picture. What changed is the speed and the autonomy. A Kubernetes cluster can provision thousands of service accounts in minutes. An AI agent can interact with multiple systems simultaneously, make decisions in real time, and modify its behaviour according to context. That is not a passive service account waiting for an instruction. It is an actor with its own capacity for judgment inside your systems.

Without a clear inventory of which agents exist, what access they have, and who is accountable for them, any conversation about controls arrives after the problem has already taken shape. You cannot govern what you have not catalogued.
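To make "catalogued" concrete: below is a minimal sketch of what a single inventory entry could record, and of how an organisation might surface agents with no owner, no credential expiry, or no recent review. The AgentRecord fields and the ungoverned helper are illustrative assumptions, not any particular product's schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentRecord:
    """One entry in a hypothetical AI-agent inventory."""
    agent_id: str
    owner: str | None                                  # accountable human or team, if any
    scopes: list[str] = field(default_factory=list)    # systems and permissions granted
    credential_expiry: datetime | None = None          # None means the credential never expires
    last_security_review: datetime | None = None       # None means it was never reviewed

def ungoverned(agents: list[AgentRecord], review_max_age_days: int = 90) -> list[AgentRecord]:
    """Return agents missing an owner, an expiry, or a recent review (timezone-aware datetimes assumed)."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=review_max_age_days)
    return [
        a for a in agents
        if a.owner is None
        or a.credential_expiry is None
        or a.last_security_review is None
        or a.last_security_review < cutoff
    ]
```

Even a register this crude answers the questions in the paragraph above; the hard part is keeping it populated as business teams deploy new agents.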

What a Breach Looks Like When the Actor Is a Machine

The case of Salesloft and Drift, which occurred last year, illustrates with precision the type of risk that emerges when identity controls fail to reach AI integrations. Attackers compromised OAuth tokens linked to the Drift chatbot — an AI integration used by Salesloft — and accessed the Salesforce environments of more than 700 organisations. The breach went unnoticed for days.

The detail that matters is not technical but operational: the security team could see that the chatbot had access. What they could not see was what it was doing with that access in real time. From the outside, the malicious queries were indistinguishable from the bot's legitimate behaviour. It was a trusted non-human identity apparently doing exactly what it was supposed to do.

That pattern — visible access, invisible behaviour — is the core of the problem. Traditional IAM frameworks were built to answer the question of who has access to what. When it comes to AI agents, the question that matters is what that access is doing at every moment, under what context, and for what purpose. These are different questions and they require different instruments.
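One way to turn "what is that access doing at every moment" into something operational is to compare each agent action against the scope the agent declared when it was onboarded and against its normal request volume. The sketch below is illustrative only: the event fields, scope names, and thresholds are invented, not a vendor's telemetry format.

```python
# Flag agent actions that fall outside the agent's declared scope, or agents
# whose request volume spikes above a per-minute ceiling.
from collections import defaultdict

# Hypothetical declared scopes per agent, captured at onboarding.
DECLARED_SCOPES = {"support-bot": {"crm:read_contacts", "crm:read_conversations"}}

def flag_suspicious(events: list[dict], rate_limit_per_minute: int = 120) -> list[dict]:
    """Each event is assumed to look like {"agent": ..., "action": ..., "minute": ...}."""
    flagged = []
    per_minute = defaultdict(int)
    for e in events:
        allowed = DECLARED_SCOPES.get(e["agent"], set())
        if e["action"] not in allowed:
            flagged.append({**e, "reason": "out_of_scope"})
        per_minute[(e["agent"], e["minute"])] += 1
        if per_minute[(e["agent"], e["minute"])] == rate_limit_per_minute + 1:
            flagged.append({**e, "reason": "rate_spike"})
    return flagged
```

A check like this would not have caught every query in the Drift scenario, but it is the shape of control the next two paragraphs argue for: continuous evaluation of behaviour, not a quarterly review of permissions.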

The static role-based control model — you assign a role, the role defines the permissions, the permissions are reviewed every quarter — was not designed for actors that operate at machine speed and modify their behaviour according to context. You need continuous risk evaluation, not periodic auditing. You need access to expire automatically when the task ends, not to persist indefinitely because nobody revoked it.

The principles have existed for a long time: least privilege, just-in-time access, ephemeral tokens that expire on their own, and integration with privileged access management platforms. None of these mechanisms is new. What is new is the urgency of extending them to a class of identities for which they were not originally conceived, and doing so before the next incident appears in the quarterly report.
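As a rough illustration of the just-in-time, ephemeral-token pattern, the sketch below hand-rolls a tiny credential broker. In practice this responsibility belongs to an IAM or PAM platform rather than application code, and every name and TTL here is an assumption.

```python
import secrets
from datetime import datetime, timedelta, timezone

# token -> {"agent": ..., "scope": ..., "expires": ...}
_grants: dict[str, dict] = {}

def issue_token(agent_id: str, scope: str, ttl_seconds: int = 300) -> str:
    """Grant a narrowly scoped credential that lives only as long as the task window."""
    token = secrets.token_urlsafe(32)
    _grants[token] = {
        "agent": agent_id,
        "scope": scope,
        "expires": datetime.now(timezone.utc) + timedelta(seconds=ttl_seconds),
    }
    return token

def authorise(token: str, requested_scope: str) -> bool:
    """Least privilege plus expiry: the token must match the scope and still be live."""
    grant = _grants.get(token)
    if grant is None or datetime.now(timezone.utc) >= grant["expires"]:
        _grants.pop(token, None)  # expired or unknown: revoke eagerly
        return False
    return grant["scope"] == requested_scope
```

The point of the sketch is the lifecycle, not the mechanism: access is created for a task, scoped to that task, and disappears on its own instead of waiting for someone to remember to revoke it.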

What Organisations Are Postponing and Why That Postponement Has a Cost

There is a reason why security teams have not extended their IAM frameworks to AI agents as fast as those agents are being deployed: deployment is driven by business teams, and security teams respond afterwards.

The asymmetry is structural. A product or operations team that finds in an AI agent a way to automate a workflow has no incentive to stop and request a security review that may take weeks. Their incentive is the immediate operational result. The cost of not doing so — a breach, unauthorised access, a compromised agent — is paid by another team, later, under a different budget.

That distribution of incentives produces exactly the chaotic inventory described above: dozens or hundreds of agents running in production, many without a formal owner, with permissions that were never reviewed and credentials whose expiry dates nobody tracks.

The solution is not to slow down the deployment of agents. The productivity gains are real, and organisations that fall behind will pay that cost in other ways. The solution is to integrate identity governance into the deployment process, not as a subsequent step but as a precondition. No agent should enter production until someone has answered three basic questions: what it has access to, who is accountable for that access, and under what conditions that access expires.
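Those three questions can be enforced mechanically rather than left to process documents. A minimal sketch of a pre-production gate, using a hypothetical manifest format:

```python
def deployment_gate(manifest: dict) -> list[str]:
    """Return the unanswered questions; an empty list means the agent may ship."""
    missing = []
    if not manifest.get("scopes"):          # what does it have access to?
        missing.append("declared access scopes")
    if not manifest.get("owner"):           # who is accountable for that access?
        missing.append("accountable owner")
    if not manifest.get("access_expiry"):   # under what conditions does the access expire?
        missing.append("access expiry condition")
    return missing

# Example: this manifest would be held back until an owner is named.
print(deployment_gate({"scopes": ["crm:read"], "access_expiry": "task_complete"}))
```

Wired into a CI/CD pipeline, a check like this is exactly the kind of deliberate friction discussed below: cheap to pass when the answers exist, impossible to pass when they do not.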

Gartner identified the lack of governance over AI agent identities as one of the most critical cybersecurity trends for 2026. Not because it is a new problem in its logic, but because the speed of adoption is outpacing the speed of controls. The gap between the two is where incidents live.

The Missing Governor in the Race Toward Operational AI

The dominant narrative about AI in enterprises is centred on capability: what an agent can do, how much time it saves, how many processes it automates. It is a legitimate narrative. The productivity numbers are real.

What that narrative leaves out is the question of who is accountable when something goes wrong. And when the actor that fails is not an employee but an agent with access to multiple systems, the question becomes much harder to answer.

The reduction in breach costs that AI-enabled IAM frameworks promise (up to 80%, according to some industry studies) does not arrive on its own. It arrives when someone decides that AI agents are identity decisions before they are engineering decisions. When the security team has real-time visibility into the behaviour of every agent, not just their static permissions. When access expires automatically and attestation flows are continuous, not annual.

Organisations that are deploying AI agents without that level of governance are not being reckless out of ignorance. They are being reckless because the pressure to move fast is real and the right controls require investment, coordination, and deliberate friction in deployment processes.

That friction, well designed, does not slow adoption. It makes adoption sustainable. The difference between an AI programme that scales in an orderly manner and one that produces a major incident in eighteen months lies neither in the quality of the models nor in the ambition of the use cases. It lies in whether someone had the conversation about identities before the first agent reached production.
