Banning AI Tools Does Not Stop Employees Using Them. It Just Stops Organisations Seeing It Happen

Picture a developer on a deadline: a bug has surfaced in a critical module, the fix isn't obvious, and the fastest path to an answer is a quick paste into ChatGPT. There is no malicious intent, nor is there any awareness that proprietary code has just crossed the organisation's boundary. No alert, no file transfer log, no audit trail. By the time someone asks where the IP went, it is already gone.

This is not a hypothetical edge case. It is, by most accounts, a daily occurrence inside enterprise environments — and it represents only one thread of a problem that has grown considerably more complex. Shadow AI, the use of AI tools outside organisational approval and governance, has moved from a compliance footnote to an active security liability. Fortinet research found that 77% of organisations experienced insider-driven data loss in 2025, with 41% reporting a financial impact between $1 million and $10 million for their most significant incident alone. Gartner has projected that by 2030, 40% of organisations will suffer security incidents directly attributable to Shadow AI risks.

The numbers point to an industry that has embraced the productivity benefits of generative AI without yet building the governance infrastructure to match. Five security leaders — spanning enterprise cybersecurity, identity risk, quantum-safe AI infrastructure, government-aligned resilience, and managed services — set out the full scope of the problem, and what it will actually take to bring it under control.

An invisible supply chain

Mohammed Aboul-Magd, VP Product at Sandbox AQ, frames Shadow AI as a structural continuation of a problem the industry has faced before. "If you think about the shadow AI problem, it isn't actually too dissimilar from the software supply chain problem we've had historically," he said.

"Something like 90% of all software you write is actually written by somebody else, because so much of modern software development relies on open-source libraries and open-source code that you pull into your environment and then build on top of. We're seeing similar parallels in how organisations are importing and utilising AI within their environments."

In practice, that parallel manifests in two ways. The first involves developers pulling AI models from third-party repositories — including public model hubs such as Hugging Face — and embedding them into production applications without formal vetting. The second involves employees accessing consumer AI tools via browsers, often through personal accounts, to accelerate their daily work. Both categories create exposure, but through entirely different mechanisms — and most organisations cannot see either of them clearly. Fortinet's data makes the visibility gap concrete: 72% of organisations reported being unable to see how users interact with sensitive data across endpoints, cloud applications, and generative AI platforms.

Debojyoti Dutta, Chief AI Officer at Nutanix, described how the leakage mechanism operates even without a single file upload. "Inadvertently, one may be revealing some intellectual property without even realising it," he said. "I'm not even uploading anything explicitly — uploading is the obvious case — but even through conversation, information can leak. Most personal contracts with AI providers today mean you are effectively giving up data sovereignty. Your data is no longer fully private unless you are on higher-tier paid plans, and even then, there are questions."

Jamal Labani, CEO and Co-Founder of Solidrange, a Saudi cybersecurity company specialising in governance, risk, compliance, and cyber resilience across government and enterprise sectors, locates the root cause in organisational incentive structures rather than employee negligence.

"Shadow AI is a symptom of a governance gap," he said. "It flourishes when the demand for speed outpaces the delivery of safe tools. A mature response moves beyond blanket bans toward visibility and controlled enablement."

Data leakage: the visible tip of a larger problem

Of all the risks Shadow AI creates, data leakage is the most immediate and the most consistently observed. Mortada Ayad, VP Sales, META, Delinea, described the mechanism with precision.

"The moment that data is entered into an external AI system, the organisation no longer has clear control over how that data is handled," he said. "There are open questions around storage, processing, and whether that data may be used to train or improve models. Even if providers offer assurances, organisations often do not have direct visibility or enforceable guarantees in practice."

That loss of control is compounded by the behavioural reality of how AI tools are used. Kylie Watson, Head of Cybersecurity, Asia Pacific, Japan, Middle East and Africa, DXC Technology, described the dynamic that makes restriction counterproductive. "If you don't provide tools that employees are comfortable using — tools that they feel actually help them do their job — then they will find their own ways to use AI anyway," she said. "That might mean using their personal devices. It might mean paying for a subscription themselves and then using those tools outside of the organisation's environment. From a workflow perspective, it looks efficient. But from a security perspective, you've now got data leaving the organisation in a way that is completely outside your control."

The concern spans the workforce, but developers are particularly exposed. Tony Zabaneh, Director, Systems Engineering – South Middle East at Fortinet, explained why their usage carries consequences that go beyond a single incident. "When a developer pastes proprietary code into a public prompt to solve a technical issue, there's no file movement, no policy violation, and no alert," he said.

"Yet the organisation's IP has just left the building. Without context-aware visibility, this kind of behavioural exfiltration remains invisible." Fortinet's own data underscores the existing anxiety: 56% of organisations said they were very concerned about sensitive data being shared with tools such as ChatGPT, yet only 12% felt fully prepared to respond to it.

Labani raised a subtler risk that typically goes unmeasured: what he calls derivative IP loss. Individual prompts, each innocuous in isolation, can collectively expose product strategies, internal roadmaps, and proprietary methodologies without ever triggering a single policy alert. "Repeated prompts can reveal product roadmaps or internal strategies, externalising institutional knowledge fragment by fragment," he said. "Organisations rarely see this accumulating until the damage is done."

Dutta extended the risk further, pointing to a consequence that organisations rarely account for: the possibility that leaked data resurfaces. "That data could later surface when someone else — even a competitor — is asking similar questions," he said. "That is even more concerning because you cannot trace it after that. It becomes untraceable."

Agents, identities, and the attack surface no one is counting

Data leakage, significant as it is, represents only the first layer of a more complex risk architecture. The proliferation of AI agents — autonomous systems that interact with enterprise data and internal tools on behalf of users — introduces a qualitatively different problem, one that most governance frameworks were not designed to manage. Aboul-Magd described the stakes through a concrete example. "If an organisation does not know which AI agents exist inside its environment and what permissions they have, the damage they could cause can be significant," he said. "We saw examples with OpenClaw, including a widely discussed case where the AI agent accidentally deleted an entire inbox, and there was no immediate kill switch to reverse it at the time."

The scale of the agent problem is already significant and accelerating. Research from BeyondTrust's Phantom Labs team recorded year-on-year growth in AI agent deployments of 466.7%, with some organisations operating more than 1,000 AI agents — many of which their own security teams were unaware of. Analysts estimated two years ago that non-human identities outnumbered human accounts 46 to 1; more recent figures put that ratio at 82 to 1.

Delinea's 2026 Identity Security Report found that while 82% of respondents expressed confidence in their ability to discover non-human identities with access to production systems, fewer than one in three organisations actually validated non-human identity and AI agent activity in real time.

Ayad placed identity at the centre of the organisational risk picture, distinguishing between what is most visible and what is most consequential. "At the organisational level, the more critical issue is identity compromise," he said. "Identity acts as a central control point within modern systems. AI tools, by design, often interact with multiple platforms, datasets, and workflows. If those permissions are mismanaged, or if an identity is compromised, the attacker effectively inherits the same level of access — and can operate within the organisation in a way that appears entirely legitimate, making detection more difficult."

External threats are compounding through the same channels. AI has materially weakened one of the most reliable signals employees have historically used to detect phishing: the telltale grammar errors and unnatural phrasing in a malicious email. When AI-generated communications are indistinguishable from legitimate business correspondence, that defensive layer disappears. Aboul-Magd added a further vector specific to agentic tools. "In the OpenClaw ecosystem, many of the third-party skills that agents downloaded were discovered to contain malware or other problematic code," he said. "Some performed actions users didn't want, and some even attempted to extract credentials and send them to malicious actors."

Why detection fails: behaviour, not installation

The detection challenge posed by Shadow AI is structurally different from the one posed by traditional Shadow IT, and several leaders identified this distinction as the reason existing security controls consistently fall short. Watson framed the contrast in operational terms.

"With Shadow IT, the way you would typically detect it is by identifying applications being installed — there's a visibility point there," she said. "With Shadow AI, it's not necessarily something that's being installed. It's used in applications, often through a browser. So instead of detecting when an app is installed, you're trying to detect behaviour within an app that is already in use. The signals are less obvious, and the activity is more embedded in normal workflows."

Legacy tooling is not built to meet that challenge. Zabaneh was direct about the limitation. "Static policies and content inspection can't keep up with fluid, user-driven workflows," he said. The data reinforces the point: Fortinet's research found that most insider incidents stem not from malicious intent but from negligence or a lack of visibility, and that legacy data loss prevention tools designed for data in motion are no longer sufficient as information flows freely across generative AI tools, SaaS applications, and unmanaged endpoints.

Watson also challenged the assumption — held in some organisations — that post-event detection through SIEM or security operations centres constitutes an adequate response. "By the time you see it, it's already happened," she said. "The data has already been uploaded. The exposure has already occurred. With AI, the speed and scale make this even more challenging — things happen quickly and can scale across users and systems faster than any reactive response can catch."

Aboul-Magd identified a further structural dimension that makes the problem self-compounding: the rate of change in the AI tools themselves. "Organisations are moving quickly to deploy AI because they want productivity improvements and cost savings," he said. "Security considerations sometimes lag behind that deployment. Historically, security controls often follow technology adoption, and we are currently in that phase where AI is evolving rapidly while governance frameworks and visibility tools are still catching up."

The World Economic Forum's Global Cybersecurity Outlook 2025 identified the widening gap between AI adoption speed and security readiness as one of the defining systemic risks of the current period, noting that fewer than one in five organisations felt confident they had adequate visibility into the AI systems operating within their environments.

Governance over restriction: building a response that holds

A consistent conclusion emerged across all five perspectives: blanket restrictions on AI usage do not work, and in many cases accelerate the underlying problem by pushing usage into environments that are even harder to monitor. Labani was direct on this point. "Blanket bans on AI are largely performative," he said. "If leadership demands high output but restricts the tools to achieve it, employees will route around security. This drives usage underground, further reducing visibility. The most effective strategy is to provide sanctioned AI tools that are genuinely useful enough to compete with public alternatives."

He proposed a practical governance model built on risk stratification — a traffic-light framework that aligns permitted AI usage with the sensitivity of the data involved: green for public information in lower-risk tools, yellow for internal data in approved enterprise environments, and red for high-value code or regulated data that is strictly ringfenced.

Watson added a dimension that policy discussions often overlook: different employee groups require fundamentally different interventions. More experienced employees may need encouragement to adopt AI, while younger employees may use it in ways that introduce risk through over-reliance or inattention to data sensitivity. "You can't just apply a single model and expect it to work for everyone," she said. "At the end of the day, AI is an augmentation tool. There needs to be a human in the loop, and that human needs to understand how to use it responsibly."
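To make the shape of a traffic-light policy like the one Labani describes concrete, the following minimal Python sketch maps data-classification labels to tiers and fails closed on anything it does not recognise. The labels, tier wording, and the evaluate_prompt helper are hypothetical illustrations, not a reference to any vendor's product.

```python
from enum import Enum

class Tier(Enum):
    GREEN = "public information, lower-risk tools permitted"
    YELLOW = "internal data, approved enterprise AI only"
    RED = "high-value code or regulated data, strictly ringfenced"

# Hypothetical mapping; a real deployment would derive this from the
# organisation's existing data-classification scheme.
LABEL_TO_TIER = {
    "public": Tier.GREEN,
    "internal": Tier.YELLOW,
    "confidential": Tier.RED,
    "regulated": Tier.RED,
    "source_code": Tier.RED,
}

def evaluate_prompt(data_label: str, tool_is_sanctioned: bool) -> tuple[bool, str]:
    """Decide whether a prompt carrying data of the given label may be sent."""
    tier = LABEL_TO_TIER.get(data_label, Tier.RED)  # fail closed on unknown labels
    if tier is Tier.GREEN:
        return True, Tier.GREEN.value
    if tier is Tier.YELLOW and tool_is_sanctioned:
        return True, Tier.YELLOW.value
    return False, f"blocked: {tier.value}"

print(evaluate_prompt("internal", tool_is_sanctioned=False))
# (False, 'blocked: internal data, approved enterprise AI only')
```

The point is less the code than the shape of the decision: classification first, tool status second, and a default that blocks rather than permits when the label is unknown.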

Dutta drew on Nutanix's own internal experience to describe what a functional governance model looks like in practice. "We have a centre of excellence — a cross-functional body that defines rules, guidelines, and enables usage," he said. "We have enabled multiple AI tools within the company, both general-purpose AI assistants and tools specific to our needs. The key point is that while AI improves productivity, employees are still responsible for the output. Whether it's software engineering, legal, finance, or marketing, you cannot just take AI output and not own it."

Aboul-Magd identified visibility as the prerequisite that makes every other governance response functional. "The first step is visibility," he said. "Organisations cannot understand their risks if they do not have visibility into what AI systems exist in their environment. We recommend that organisations start by building a comprehensive inventory of AI usage across their infrastructure — identifying which AI models are present, which AI agents are used, which MCP servers are running, and which third-party AI tools employees interact with. Once you understand what exists inside the environment, you can start building the policies and controls needed to manage those risks effectively."
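An inventory of that kind does not have to begin as a product purchase; even a structured register captures the categories Aboul-Magd lists: models, agents, MCP servers, and third-party tools. The sketch below is a minimal illustration of such a register in Python, offered under the assumption that every field name and example entry is hypothetical.

```python
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIAsset:
    """One entry in an AI usage inventory (fields and values are illustrative)."""
    name: str
    kind: str                      # "model", "agent", "mcp_server", or "third_party_tool"
    owner: str                     # accountable team or individual
    data_access: list[str] = field(default_factory=list)  # systems or datasets it can touch
    sanctioned: bool = False
    last_reviewed: date | None = None

@dataclass
class AIInventory:
    assets: list[AIAsset] = field(default_factory=list)

    def register(self, asset: AIAsset) -> None:
        self.assets.append(asset)

    def unsanctioned(self) -> list[AIAsset]:
        """Assets observed in use but never approved: the Shadow AI candidates."""
        return [a for a in self.assets if not a.sanctioned]

inventory = AIInventory()
inventory.register(AIAsset(name="support-summariser", kind="agent",
                           owner="cs-platform", data_access=["ticket_db"], sanctioned=True))
inventory.register(AIAsset(name="ChatGPT via personal accounts", kind="third_party_tool",
                           owner="unknown"))
print([a.name for a in inventory.unsanctioned()])  # ['ChatGPT via personal accounts']
```

In practice, populating a register like this depends on the visibility sources discussed elsewhere in this piece, such as endpoint, browser, and SaaS telemetry, rather than on self-reporting alone.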

Zabaneh outlined the technology architecture that underpins a durable response, pointing to the convergence of data loss prevention and insider risk management as the defining platform requirement of this moment. "Next-generation platforms have emerged that unify data loss prevention with insider risk management," he said. "Built for modern work, they combine traditional DLP capabilities, real-time behavioural analytics, visibility into sanctioned and unsanctioned SaaS and AI tools, and dynamic enforcement across endpoints, cloud, and users. What used to be separate domains — data protection and user risk — are now converging into a single, behaviour-first control plane."

Ayad brought the people dimension into focus, identifying the shift in organisational posture that technology alone cannot deliver. "The focus needs to shift from control to accountability and awareness," he said. "Since organisations cannot realistically eliminate the use of these tools, they need to influence how they are used. An informed user is far less likely to introduce risk, even when using unsanctioned tools."

The window to act is narrowing

The trajectory of Shadow AI is not a future risk to be mapped and prepared for. The tools are already in use, the agents are already operating, and in most organisations, the data has already begun to move. Delinea's research found that 90% of respondents acknowledged at least some form of identity visibility gap, with discovery gaps in AI-related environments occurring at nearly double the rate of legacy systems. The governance deficit is not theoretical — it is measurable and widening with each new platform update and each new agent deployment.

Dutta offered a note of measured progress that the data elsewhere supports. "Earlier, there were many cases where people were leaking sensitive code into tools like ChatGPT," he said. "Now, with better awareness and better enablement, that is happening less. The key is enabling people properly. If you don't enable them, resistance builds, and that can actually increase risk rather than reduce it."

Watson's summary of what it takes to close that deficit has become a consensus position across the security community. "Visibility is the core of it," she said. "You can't control everything all the time. There will always be cases where something slips through. But if you have visibility, you can at least understand what's happening — across the tools used, the data entered, how policies are enforced, and what's detected across your environment. If you have that, you can manage the risk. If you don't, you're essentially operating blind."

Labani's framing brings the strategic objective into sharpest relief. "True control requires a shift from monitoring software to monitoring the movement of information across AI-mediated workflows," he said. "The goal is not to stop AI use, but to move it from the shadows into a visible, governed, and risk-proportionate environment." The organisations that treat Shadow AI as an operational reality — rather than a policy problem to be solved by prohibition — are the ones building the governance capability to manage what comes next. The developer will keep working on the deadline. The question is whether the organisation can see what they are doing.
