{"version":"1.0","type":"agent_native_article","locale":"en","slug":"ai-agents-inside-enterprise-systems-identity-strategy-mou2fyht","title":"AI Agents Are Already Inside Your Systems and Your Identity Strategy Doesn't Know It Yet","primary_category":"ai","author":{"name":"Francisco Torres","slug":"francisco-torres"},"published_at":"2026-05-06T12:02:39.544Z","total_votes":84,"comment_count":0,"has_map":true,"urls":{"human":"https://sustainabl.net/en/articulo/ai-agents-inside-enterprise-systems-identity-strategy-mou2fyht","agent":"https://sustainabl.net/agent-native/en/articulo/ai-agents-inside-enterprise-systems-identity-strategy-mou2fyht"},"summary":{"one_line":"Enterprise AI agents are proliferating faster than identity governance frameworks can track them, creating a structural security gap that traditional IAM was never designed to handle.","core_question":"How should organizations govern the identities and access rights of AI agents that are already operating inside their systems without adequate oversight?","main_thesis":"The rapid deployment of AI agents in enterprise environments has outpaced identity and access management frameworks built for human actors, creating an ungoverned class of non-human identities with real access to critical systems — and the solution is integrating identity governance into deployment as a precondition, not an afterthought."},"content_markdown":"## AI Agents Are Already Inside Your Systems and Your Identity Strategy Still Doesn't Know It\n\nBy the end of 2026, 40% of enterprise applications will include AI agents with specific tasks. Twelve months ago, that figure was below 5%. The leap is not merely statistical: it is structural. 
Millions of non-human identities are operating right now on corporate networks with access to data, systems, and decisions, and most security teams are still looking at the problem with the wrong tools.\n\nIdentity and access management — what the industry calls IAM — was built for a world where the actors in the system were people. Someone comes in, is assigned a role, has their access reviewed periodically, and eventually gets offboarded. The cycle has human logic because it was designed for humans. AI agents do not come in through human resources, they do not have a manager to approve their permissions, and they do not have a scheduled termination date. But they do have access. And that access, in most organisations, is not governed with the same rigour applied to any new employee.\n\nThis is not a minor technical problem. It is a structural crack in how companies understand who — or what — operates inside their systems.\n\n## The Inventory Nobody Has\n\nBefore talking about controls, there is a more basic question that few organisations can answer with precision: how many AI agents are running in their environments right now, who deployed them, and what they can do.\n\nThe uncomfortable answer is that most do not know. According to data published by Gravitee, only one in seven AI agents operating in production environments received a formal review from the security team before being deployed. The rest were launched by business or development teams driven by operational urgency, without going through the same filters applied to any new system. The result is an ecosystem of non-human identities that accumulate permissions without audit, operate under shared credentials, and remain active long after the workflow that originated them has changed or disappeared.\n\nThe problem is not new in conceptual terms. 
Non-human identities — service accounts, API keys, automation scripts — already outnumbered human users in most large enterprises before AI agents entered the picture. What changed is the speed and the autonomy. A Kubernetes cluster can provision thousands of service accounts in minutes. An AI agent can interact with multiple systems simultaneously, make decisions in real time, and modify its behaviour according to context. That is not a passive service account waiting for an instruction. It is an actor with its own capacity for judgment inside your systems.\n\nWithout a clear inventory of which agents exist, what access they have, and who is accountable for them, any conversation about controls comes after the problem. You cannot govern what you have not catalogued.\n\n## What a Breach Looks Like When the Actor Is a Machine\n\nThe case of Salesloft and Drift, which occurred last year, precisely illustrates the type of risk that emerges when identity controls fail to reach AI integrations. Attackers compromised OAuth tokens linked to the Drift chatbot — an AI integration used by Salesloft — and accessed the Salesforce environments of more than 700 organisations. The breach went unnoticed for days.\n\nThe detail that matters is not technical but operational: the security team could see that the chatbot had access. What they could not see was what it was doing with that access in real time. From the outside, the malicious queries were indistinguishable from the bot's legitimate behaviour. It was a trusted non-human identity that appeared to be doing exactly what it was supposed to do.\n\nThat pattern — visible access, invisible behaviour — is the core of the problem. Traditional IAM frameworks were built to answer the question of who has access to what. When it comes to AI agents, the question that matters is what that access is doing at every moment, under what context, and for what purpose. 
These are different questions, and they require different tools.\n\nThe static role-based control model — you assign a role, the role defines the permissions, the permissions are reviewed every quarter — was not designed for actors that operate at machine speed and modify their behaviour according to context. You need continuous risk evaluation, not periodic auditing. You need access to expire automatically when the task ends, not to persist indefinitely because nobody revoked it.\n\nThe principles have existed for a long time. Least privilege, just-in-time access, ephemeral tokens that expire on their own, integration with privileged access management platforms. None of these mechanisms is new. What is new is the urgency of extending them to a class of identities for which they were not originally conceived, and doing so before the next incident appears in the quarterly report.\n\n## What Organisations Are Postponing and Why That Postponement Has a Cost\n\nThere is a reason why security teams have not extended their IAM frameworks to AI agents with the same speed at which those agents are being deployed: deployment is driven by business teams, and security teams respond afterwards.\n\nThe asymmetry is structural. A product or operations team that sees an AI agent as a way to automate a workflow has no incentive to stop and request a security review that may take weeks. Their incentive is the immediate operational result. The cost of not doing so — a breach, unauthorised access, a compromised agent — is paid by another team, later, under a different budget.\n\nThat distribution of incentives produces exactly the chaotic inventory described above: dozens or hundreds of agents running in production, many without a formal owner, with permissions that were never reviewed and credentials whose expiry dates nobody knows.\n\nThe solution is not to slow down the deployment of agents. 
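The principles named earlier (least privilege, just-in-time access, tokens that expire on their own) fit in a few lines of code. A minimal sketch, assuming hypothetical names such as `AgentGrant` and `issue_grant` rather than any real IAM product's API:\n\n```python
# Sketch of just-in-time, task-scoped credentials for an AI agent.
# AgentGrant and issue_grant are illustrative names, not a real library API.
import secrets
import time
from dataclasses import dataclass, field


@dataclass(frozen=True)
class AgentGrant:
    """A short-lived, least-privilege credential tied to a single task."""
    agent_id: str
    owner: str            # accountable human, required at issuance
    scopes: frozenset     # explicit least-privilege scope set
    expires_at: float     # hard expiry; nothing persists indefinitely
    token: str = field(default_factory=lambda: secrets.token_urlsafe(32))

    def allows(self, scope: str) -> bool:
        """A scope is usable only while the grant is live and was requested."""
        return time.time() < self.expires_at and scope in self.scopes


def issue_grant(agent_id: str, owner: str, scopes: set,
                ttl_seconds: int = 900) -> AgentGrant:
    """Issue an ephemeral grant; refuse ownerless or unscoped requests."""
    if not owner:
        raise ValueError("every agent grant needs an accountable owner")
    if not scopes:
        raise ValueError("empty scope set: state least privilege explicitly")
    return AgentGrant(agent_id, owner, frozenset(scopes),
                      expires_at=time.time() + ttl_seconds)


grant = issue_grant("invoice-bot", owner="alice@example.com",
                    scopes={"crm:read"}, ttl_seconds=600)
print(grant.allows("crm:read"))    # usable while the grant is live
print(grant.allows("crm:write"))   # refused: scope was never requested
```\n\nThe point of the sketch is the defaults: a grant cannot be issued without an accountable owner or an explicit scope set, and expiry is automatic, so a forgotten revocation no longer leaves a credential alive indefinitely.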
The productivity gains are real, and organisations that fall behind will pay that cost in other ways. The solution is to integrate identity governance into the deployment process, not as a subsequent step but as a prior condition. No agent should enter production until someone has answered three basic questions: what it has access to, who is accountable for that access, and under what conditions that access expires.\n\nGartner identified the lack of governance over AI agent identities as one of the most critical cybersecurity trends for 2026. Not because the logic of the problem is new, but because the speed of adoption is outpacing the speed of controls. The gap between the two is where incidents live.\n\n## The Missing Governor in the Race Toward Operational AI\n\nThe dominant narrative about AI in enterprises is centred on capability: what an agent can do, how much time it saves, how many processes it automates. It is a legitimate narrative. The productivity numbers are real.\n\nWhat that narrative leaves out is the question of who is accountable when something goes wrong. And when the actor that fails is not an employee but an agent with access to multiple systems, the question becomes much harder to answer.\n\nThe reduction in breach costs that AI-driven IAM frameworks promise — up to 80%, according to some industry studies — does not arrive on its own. It arrives when someone decides that AI agents are identity decisions before they are engineering decisions. When the security team has real-time visibility into the behaviour of every agent, not just its static permissions. When access expires automatically and attestation flows are continuous, not annual.\n\nOrganisations that are deploying AI agents without that level of governance are not being reckless out of ignorance. 
They are being reckless because the pressure to move fast is real and the right controls require investment, coordination, and deliberate friction in deployment processes.\n\nThat friction, well designed, does not slow adoption. It makes adoption sustainable. The difference between an AI programme that scales in an orderly manner and one that produces a major incident in eighteen months is not in the quality of the models nor in the ambition of the use cases. It is in whether someone had the conversation about identities before the first agent arrived in production.","article_map":{"title":"AI Agents Are Already Inside Your Systems and Your Identity Strategy Doesn't Know It Yet","entities":[{"name":"Gravitee","type":"company","role_in_article":"Source of data on AI agent security review rates in production environments"},{"name":"Salesloft","type":"company","role_in_article":"Organization whose AI integration was compromised in the cited breach case"},{"name":"Drift","type":"product","role_in_article":"AI chatbot integration whose OAuth tokens were compromised, enabling access to 700+ Salesforce environments"},{"name":"Salesforce","type":"product","role_in_article":"Platform whose customer environments were accessed via the Drift/Salesloft breach"},{"name":"Gartner","type":"institution","role_in_article":"Cited as identifying AI agent identity governance as a top cybersecurity trend for 2026"},{"name":"Kubernetes","type":"technology","role_in_article":"Example of infrastructure that can provision thousands of service accounts in minutes, illustrating scale of non-human identity proliferation"},{"name":"IAM (Identity and Access Management)","type":"technology","role_in_article":"Existing governance framework built for human actors that is structurally inadequate for AI agent identities"},{"name":"OAuth","type":"technology","role_in_article":"Authentication protocol whose tokens were compromised in the Salesloft/Drift breach"},{"name":"Francisco 
Torres","type":"person","role_in_article":"Author of the article"}],"tradeoffs":["Speed of AI agent deployment vs. rigor of identity governance review — business teams optimize for the former, security teams for the latter","Operational productivity gains from ungoverned agents vs. tail risk of a breach that compromises hundreds of downstream organizations","Friction in deployment processes (governance as precondition) vs. adoption velocity — the article argues well-designed friction enables rather than blocks scale","Periodic auditing (low cost, low coverage) vs. continuous real-time behavioral monitoring (higher cost, necessary for machine-speed actors)","Shared credentials for AI agents (operationally convenient) vs. individual scoped credentials (governable but more complex to manage)"],"key_claims":[{"claim":"By end of 2026, 40% of enterprise applications will include AI agents with specific tasks, up from under 5% twelve months prior.","confidence":"high","support_type":"reported_fact"},{"claim":"Only 1 in 7 AI agents in production environments received a formal security review before deployment, according to Gravitee data.","confidence":"high","support_type":"reported_fact"},{"claim":"Attackers compromised OAuth tokens linked to Drift (a Salesloft AI integration) and accessed Salesforce environments of more than 700 organizations.","confidence":"high","support_type":"reported_fact"},{"claim":"Non-human identities already outnumbered human users in most large enterprises before AI agents entered the picture.","confidence":"medium","support_type":"reported_fact"},{"claim":"AI frameworks in IAM can reduce breach costs by up to 80%, according to some industry studies.","confidence":"medium","support_type":"reported_fact"},{"claim":"Gartner identified lack of governance over AI agent identities as one of the most critical cybersecurity trends for 2026.","confidence":"high","support_type":"reported_fact"},{"claim":"The core problem with AI agent security is not 
visible access but invisible behavior — malicious activity is indistinguishable from legitimate agent operation.","confidence":"high","support_type":"inference"},{"claim":"Organizations deploying agents without identity governance are not ignorant of the risk — they are responding rationally to deployment pressure incentives.","confidence":"medium","support_type":"editorial_judgment"}],"main_thesis":"The rapid deployment of AI agents in enterprise environments has outpaced identity and access management frameworks built for human actors, creating an ungoverned class of non-human identities with real access to critical systems — and the solution is integrating identity governance into deployment as a precondition, not an afterthought.","core_question":"How should organizations govern the identities and access rights of AI agents that are already operating inside their systems without adequate oversight?","core_tensions":["Human-centric IAM frameworks vs. machine-speed, autonomous AI actors that do not fit human identity lifecycle models","Organizational pressure to deploy AI fast vs. the structural requirement to govern identities before deployment","Visibility of access (what traditional IAM provides) vs. visibility of behavior (what AI agent security requires)","Individual team incentives (deploy now, gain productivity) vs. organizational risk (breach costs paid collectively and later)","Treating AI agents as engineering decisions vs. 
treating them as identity and governance decisions first"],"open_questions":["What tooling or standards will emerge specifically for AI agent identity lifecycle management, distinct from human IAM?","How should accountability be assigned when an AI agent causes a breach — the deploying team, the security team, the vendor?","At what point does the inventory problem become unmanageable — is there a scale threshold beyond which retroactive governance is impossible?","Will regulatory frameworks (e.g., EU AI Act, SEC cybersecurity rules) eventually mandate AI agent identity governance, and on what timeline?","How do organizations govern AI agents deployed by third-party vendors or SaaS integrations they do not directly control?","Can behavioral monitoring of AI agents be automated at the scale required, or does it require human review that cannot keep pace with agent proliferation?"],"training_value":{"recommended_for":["CISOs and security architects evaluating AI agent risk","CTOs and engineering leaders designing AI agent deployment pipelines","Enterprise risk and compliance officers assessing AI governance gaps","Product and operations leaders deploying AI agents who need to understand security obligations","Investors or analysts evaluating enterprise AI security posture","Business strategists advising on AI transformation programs"],"when_this_article_is_useful":["When advising an enterprise on AI adoption strategy that includes agentic workflows","When evaluating cybersecurity posture of an organization deploying AI agents","When designing deployment processes for AI agents in regulated or data-sensitive environments","When assessing the risk profile of SaaS integrations that include AI agent components","When building the business case for investment in non-human identity management tooling","When a security team needs to explain to business stakeholders why governance preconditions matter"],"what_a_business_agent_can_learn":["How to frame AI agent deployment as 
an identity governance decision, not just an engineering or productivity decision","The specific failure mode of 'visible access, invisible behavior' and why it requires different controls than traditional IAM","How incentive misalignment between deploying teams and security teams produces systemic ungoverned risk","The three minimum governance questions every AI agent deployment should answer: what it accesses, who owns that access, and when it expires","Why least-privilege and just-in-time access principles are not new but urgently need extension to non-human identities","How to distinguish between governance friction that slows adoption and governance friction that makes adoption sustainable"]},"argument_outline":[{"label":"Scale shift","point":"By end of 2026, 40% of enterprise applications will include AI agents, up from under 5% twelve months prior — a structural, not merely statistical, change.","why_it_matters":"The speed of adoption means security teams are already behind; the gap is widening, not narrowing."},{"label":"Inventory gap","point":"Most organizations cannot answer how many AI agents are running, who deployed them, or what they can access. Only 1 in 7 agents received a formal security review before production deployment (Gravitee data).","why_it_matters":"You cannot govern what you have not catalogued. Controls are meaningless without a baseline inventory."},{"label":"IAM was built for humans","point":"Traditional identity and access management assumes human actors with managers, roles, and offboarding cycles. AI agents have none of these anchors.","why_it_matters":"Applying human-centric IAM to machine actors leaves structural blind spots in access governance."},{"label":"Real breach anatomy","point":"The Salesloft/Drift incident: attackers compromised OAuth tokens linked to an AI chatbot and accessed Salesforce environments of 700+ organizations. 
The breach was invisible because malicious queries were indistinguishable from legitimate bot behavior.","why_it_matters":"The threat model for AI agents is not just unauthorized access — it is invisible misuse of authorized access."},{"label":"Incentive asymmetry","point":"Business teams deploy agents for immediate operational gain; security costs from ungoverned agents are paid by different teams, later, under different budgets.","why_it_matters":"This structural misalignment produces chaotic inventories and ungoverned permissions by default, not by negligence."},{"label":"Known principles, new urgency","point":"Least privilege, just-in-time access, ephemeral tokens, and privileged access management are not new concepts — they simply have not been extended to AI agent identities at scale.","why_it_matters":"The solution exists; the gap is organizational will and process integration, not technical invention."}],"one_line_summary":"Enterprise AI agents are proliferating faster than identity governance frameworks can track them, creating a structural security gap that traditional IAM was never designed to handle.","related_articles":[{"reason":"Directly complementary: documents a real incident where an AI agent operating autonomously caused catastrophic data loss, illustrating the governance failure mode described in this article","article_id":12270},{"reason":"Relevant context: Salesforce's agentic enterprise design direction is directly implicated in the identity governance problem — understanding the product trajectory informs the security challenge","article_id":12290}],"business_patterns":["Deployment-security asymmetry: business teams deploy fast, security teams respond after the fact — a recurring pattern in every new technology wave","Non-human identity sprawl: service accounts, API keys, and automation scripts already outnumbered human users before AI agents; AI accelerates an existing structural problem","Invisible misuse pattern: compromised trusted 
identities are indistinguishable from legitimate behavior — the threat hides in normal operations","Incentive misalignment producing systemic risk: costs of ungoverned deployment are externalized to different teams and future time periods","Governance lag: security frameworks are always built for the previous generation of actors; AI agents are the current generation without a framework"],"business_decisions":["Whether to require formal security review as a precondition for AI agent deployment, not a subsequent step","Whether to assign explicit human ownership and accountability for every AI agent in production","Whether to implement ephemeral, task-scoped access tokens for AI agents rather than persistent credentials","Whether to invest in real-time behavioral monitoring of AI agents, not just static permission auditing","Whether to conduct an immediate inventory of all AI agents currently running in production environments","Whether to integrate identity governance tooling (PAM, just-in-time access) with AI agent deployment pipelines","Whether to restructure incentives so that security costs of ungoverned agents are visible to the teams deploying them"]}}