Enterprises Are Running AI at Scale. Their Governance Isn’t.
Eighty-five per cent of enterprises now describe artificial intelligence as central to their business strategy, deployed across multiple functions or embedded in core operations. The decision to adopt AI has been made. The harder question of how to govern what is already running has not been answered. That is the central finding of Optro’s 2026 Risk Intelligence Report, which surveyed 822 audit, GRC, and IT decision-makers at organisations with annual revenues of at least $100 million across the United States, Canada, Germany, and the United Kingdom.
The consequences of that governance gap are already being felt. In the past 12 months alone, 40% of organisations reported inaccurate AI outputs, 33% experienced policy violations, 28% received customer complaints linked to AI systems, 27% suffered data breaches, and 26% faced regulatory action. These are not warning signs of a future problem. They are the current operating reality for enterprises that have deployed AI faster than they have built the infrastructure to oversee it.
The Workforce Is the Risk Surface Nobody Is Governing
The report is clear that the most significant AI risk surface in the enterprise is not the models themselves. It is the workforce, and it is largely ungoverned. When respondents were asked to identify the primary driver of risky AI-usage behaviour, 34% cited staff inputting sensitive data into AI tools as the top concern. A further 21% attributed the majority of incidents to insufficient employee training rather than malicious intent, and another 21% pointed to organisational pressure to move quickly. Near-universal AI adoption is taking place in an environment where governance is designed around systems and policies, not around the daily decisions of individuals under pressure.
Kristin Colburn, Leader of Data and AI Governance at Dayforce, described the challenge directly. “AI adoption is moving faster than many organisations’ ability to fully understand and govern how it’s being used,” she said. “To keep pace, governance needs to evolve from reactive oversight to a continuous, integrated capability that helps organisations better understand AI use across the enterprise and manage the risks that come with it.”
Shadow AI compounds the problem. Only 25% of organisations report comprehensive visibility into how employees are actually using AI. Meanwhile, 35% describe shadow AI use as pervasive or widespread across their organisation, and another 45% call it moderate in prevalence. Governance cannot function against an exposure that has not yet been mapped.
Shared Responsibility Has Produced an Accountability Vacuum
The structural failure underpinning the behavioural risk is one of distributed ownership. AI governance responsibility is spread so widely that no single function holds sufficient authority to act. The IT department carries the largest share at just 25%, followed by risk management at 18%, cross-functional governance arrangements at 17%, and dedicated AI governance teams at only 10%. When incidents occur, responsibility diffuses further across risk, compliance and internal audit (29%), executive leadership (27%), and IT and engineering (24%).
Most strikingly, the authority to shut down an AI system sits simultaneously across five functions: leadership and risk each at 46%, IT at 43%, and compliance and security each at 42%. No single owner, no clear kill switch. The technology landscape mirrors this fragmentation. Most organisations manage AI governance through disconnected tools, leaving them without a unified view of risk. The result is a programme that lives primarily in documents rather than functioning as a system of action.
The confidence gap this produces is telling. Over half of leaders (58%) believe their governance controls are keeping pace with AI adoption. Yet only 18% have active mitigation covering most or all identified risks, and only 19% can identify cross-functional risks in real time. Those closest to technical execution see the problem most clearly: 76% of leaders believe they could respond decisively to an AI incident, but that figure drops to 66% among CISOs.
Investment Is Moving, But the Gap Is Still Widening
The research offers reason for measured optimism. Nearly three-quarters of respondents (72%) expect their GRC technology budgets to increase over the coming year, with AI governance solutions (43%), regulatory compliance tools (41%), and GRC platform upgrades (38%) ranking as the top three investment priorities. Automated risk assessments, GRC platform integration, and centralised AI inventory lead demand for future governance capabilities.
Guru Sethupathy, GM of AI Governance at Optro, argued that the commercial case for governance has been misunderstood. “Governance should not be viewed as a barrier to innovation, but as foundational for enabling organisations to deploy high-integrity AI,” he said. “Our research shows when monitoring and oversight are integrated into the AI lifecycle, organisations move faster and more securely. As agents increasingly perform complex tasks, the core work of the organisation becomes the oversight and governance of those AI agents.”
That language matters because the regulatory environment is no longer theoretical. Following the 2025 rollout of the EU AI Act, US state-level regulation has accelerated rapidly. California now mandates disclosure of AI-generated content, Colorado has enacted the country’s most comprehensive cross-sectoral AI law targeting high-risk systems in healthcare and housing, and New York has passed landmark safety legislation. What began as a question of organisational readiness has become a question of regulatory standing.
Only 34% of organisations report AI governance programmes that are strategic and continuously improving. The vast majority are operating in reactive, fragmented states that the report describes as structurally unable to keep pace with the speed of AI deployment. The report’s conclusion is pointed: organisations that fail to close the AI oversight gap now will find it structurally harder to bridge later. For GRC leaders, the choice is build governance infrastructure now, or manage the consequences of its absence.