Enterprise AI Agents Surge 467% as 'Shadow AI Workforce' Raises Security Alarm - BeyondTrust Report
Enterprise environments are accumulating AI agents at a pace that security infrastructure was never designed to manage. Research published today by BeyondTrust's Phantom Labs team, drawn from live telemetry collected through its Identity Security Insights platform, puts year-on-year growth in AI agent deployments at 466.7% — a figure researchers say is being driven by the rapid embedding of AI capabilities across mainstream enterprise software, from Microsoft Copilot and Salesforce to coding assistants and collaboration tools such as Jira and Confluence.
The scale of deployment is already significant: some organisations examined in the research operate well over 1,000 AI agents, many of which their own security teams were unaware of. BeyondTrust describes the phenomenon as a "shadow AI workforce" — a growing layer of machine identities that interact with APIs, inherit user and service-role permissions, and act autonomously across enterprise systems, all largely outside the governance frameworks that apply to human accounts.
"Organisations are introducing thousands of new machine identities through AI agents, often without realising the level of access those agents inherit," said Fletcher Davis, Director of Research for BeyondTrust Phantom Labs. "In many environments we studied, AI agents were operating with privileges comparable to human administrators. As organisations move from chatbot use cases to more autonomous agentic AI, the identity attack surface will only expand."
A governance gap takes shape
The research arrives as agentic AI moves from proof-of-concept into operational deployment. According to Gartner, by 2028 at least 15% of day-to-day work decisions will be made autonomously by AI agents — up from less than 1% in 2024. The enterprise identity landscape is shifting accordingly. BeyondTrust's findings indicate that machine and AI identities now outnumber human identities in many assessed environments, and that the imbalance is widening quickly.
Unlike traditional service accounts, AI agents can inherit permissions from users or service roles, interact with APIs and enterprise tools, and operate across systems without requiring human authorisation at each step. That combination of autonomy and privilege creates attack paths that conventional identity and access management tooling was not built to detect. Phantom Labs researchers identified several patterns of concern across assessed environments: shadow AI agents deployed through low-code platforms or embedded enterprise applications and operating outside formal IT governance; agent identities that appear correctly governed in static reports but can elevate privileges dynamically during use; and long-lived API keys and static credentials assigned to AI agents with no rotation policies or lifecycle controls in place.
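The last of those patterns, long-lived static credentials with no rotation policy, is also the most mechanical to check for. As a minimal illustration, not drawn from the report, the following Python sketch uses boto3 to flag active IAM access keys older than a chosen rotation threshold; the 90-day cutoff is an assumption, and the same idea would need to be repeated for whichever credential stores a given agent platform actually uses.

```python
import datetime

import boto3

MAX_KEY_AGE_DAYS = 90  # assumed rotation threshold; not a figure from the report

iam = boto3.client("iam")
now = datetime.datetime.now(datetime.timezone.utc)

# Walk every IAM user and flag active access keys older than the threshold.
for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        metadata = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in metadata:
            age_days = (now - key["CreateDate"]).days
            if key["Status"] == "Active" and age_days > MAX_KEY_AGE_DAYS:
                print(f"{user['UserName']}: key {key['AccessKeyId']} is {age_days} days old")
```

A check like this only answers the narrow question of credential age; mapping which of those identities actually belong to AI agents, and what they can reach, is the visibility problem the research describes.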
The World Economic Forum's Global Cybersecurity Outlook 2025 flagged the widening gap between AI adoption speed and security readiness as one of the defining systemic risks of the current period, noting that fewer than one in five organisations feel confident they have adequate visibility into the AI systems operating within their environments.
Real-world breach scenarios
The new findings extend earlier work from Phantom Labs that has probed how specific AI platforms introduce identity and privilege risk in practice. In a previous investigation, researchers demonstrated a real-world breach scenario involving Microsoft Copilot Studio in which AI agents leaked secrets and granted unauthorised access to cloud infrastructure, despite existing security controls being in place. Separate research into AWS Bedrock uncovered how generating long-term API keys can automatically create IAM users with excessively broad permissions — a finding that led to the release of an open-source detection tool, bedrock-keys-security, now available on GitHub.
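BeyondTrust has not reproduced the tool's detection logic here, so the sketch below is only an illustration of the kind of audit it describes, not the behaviour of bedrock-keys-security itself: it lists IAM users that hold an active long-term access key alongside broadly scoped managed policies. The policy names treated as "broad" are assumptions for the example.

```python
import boto3

# Managed policies treated as "excessively broad" for this illustration only;
# this is not the detection logic of bedrock-keys-security.
BROAD_POLICIES = {"AdministratorAccess", "AmazonBedrockFullAccess"}

iam = boto3.client("iam")

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        name = user["UserName"]
        attached = iam.list_attached_user_policies(UserName=name)["AttachedPolicies"]
        broad = [p["PolicyName"] for p in attached if p["PolicyName"] in BROAD_POLICIES]
        has_active_key = any(
            k["Status"] == "Active"
            for k in iam.list_access_keys(UserName=name)["AccessKeyMetadata"]
        )
        if broad and has_active_key:
            print(f"{name}: active long-term key with broad policies {', '.join(broad)}")
```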
Taken together, the research points to a structural problem rather than a configuration error. The platforms driving AI agent adoption were not designed with the assumption that agents would accumulate, interact, and act at this scale. The governance frameworks organisations use to manage human identities have not been extended to cover them.
A rapidly expanding attack surface
The identity security market is responding. According to MarketsandMarkets, the global identity and access management market is projected to reach $34.5 billion by 2029, with machine identity management — covering service accounts, API keys, certificates, and now AI agents — emerging as the fastest-growing segment. The extension of identity governance to AI agents represents both a technical and organisational challenge: organisations must first gain visibility into what agents exist and what they can access before any remediation is possible.
BeyondTrust's Identity Security Insights platform is designed to address precisely that baseline requirement — mapping AI agent identities, surfacing hidden privilege relationships, and identifying attack paths across hybrid and cloud environments, with remediation guidance aligned to the MITRE ATT&CK framework. The company is offering a complimentary Identity Security Risk Assessment that includes AI agent exposure analysis.
The 466.7% growth figure covers a single year. If AI agent deployment continues at anything approaching its current trajectory, security teams face an identity governance problem that grows materially more complex with every platform update, every new enterprise application, and every developer who reaches for a low-code automation tool. The question BeyondTrust's research poses is whether organisations can close the gap between the speed of AI adoption and the maturity of the governance frameworks meant to contain it.