How global enterprises are turning EU AI Act compliance into a competitive moat before the August 2026 deadline

By August 2026, the European Union's AI Act will become widely applicable, with non-compliance carrying penalties of up to €35 million or 7% of worldwide annual turnover, whichever is higher. For global enterprises, that deadline is no longer a regulatory abstraction. It is a board-level liability with the potential to wipe out a full year of operating profit, and it sits inside a wider patchwork of overlapping regimes stretching from Brussels to Washington, Singapore to the Gulf. The companies that are pulling ahead are the ones treating compliance not as a cost of doing business in Europe, but as a structural advantage they are quietly compounding.

The shape of the trap has come into focus over the past year. Multinationals are being asked to qualify thousands of AI systems - including ones built before the concept of AI regulation existed - against a tiered risk framework that still has open interpretive questions. They are doing this as their own pilots move into production, as agentic systems start making autonomous decisions within third-party software, and as the rules themselves continue to shift.

The European Parliament is now considering postponing certain high-risk provisions because the underlying technical standards may not be ready in time. Several countries are drafting their own variants of the Act. Penalties on this scale, paired with this much ambiguity, have changed the conversation in the boardroom.

The end of offloading: why no one can outsource the AI Act

For most of the past two years, a cottage industry of governance vendors has marketed itself as a turnkey solution to the AI Act. Buy the platform, the pitch ran, and the compliance burden moves to us. Jacob Beswick, Senior Director of AI Governance Solutions at Dataiku, has watched this market mature into something more honest. "A lot of AI governance providers come with the answer. They say that if you use us, you'll be compliant. In doing that, and if they are purchased by a company, that company is offloading the compliance burden to that tech provider," he says, describing what is now the most common failure mode in enterprise procurement.

Dataiku's own approach starts somewhere different. Rather than selling compliance as a product, Beswick says the company works with customers to articulate their goals, then builds a governance exercise around those priorities. The distinction between buying compliance and building it increasingly separates the companies that will be ready in August from those that will not.

The practical first move, Beswick argues, is qualification. Every AI request that enters an organisation, whether through Jira, email, or a conversation in a corridor, has to be tagged against a regulatory framework before development begins.

"This request has come in now, we need to qualify it against certain elements of a framework, whether that's the AI act, whether that's model risk management, whether that's something else. So it's being able to say you, Johnny, requested this thing, what is the goal of this thing, so what's the intended purpose which lines up with the AI act," he explained. The point is not bureaucratic. It determines whether a system is prohibited, high risk, limited, or minimal, and that determination shapes everything that follows.
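The tiered determination Beswick describes can be sketched as a simple intake check. The following is a minimal Python sketch, assuming hypothetical purpose-to-tier mappings; a real qualification exercise would work from the Act's actual prohibited-practice and Annex III high-risk categories with legal review:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical keyword triggers for illustration only; the real mapping
# comes from the Act's text and a legal team's interpretation of it.
PROHIBITED_PURPOSES = {"social scoring", "subliminal manipulation"}
HIGH_RISK_PURPOSES = {"credit scoring", "recruitment screening", "biometric identification"}
TRANSPARENCY_PURPOSES = {"chatbot", "content generation"}

@dataclass
class AIRequest:
    requester: str
    intended_purpose: str

def qualify(request: AIRequest) -> RiskTier:
    """Tag an incoming AI request with a risk tier based on its stated purpose."""
    purpose = request.intended_purpose.lower()
    if purpose in PROHIBITED_PURPOSES:
        return RiskTier.PROHIBITED
    if purpose in HIGH_RISK_PURPOSES:
        return RiskTier.HIGH
    if purpose in TRANSPARENCY_PURPOSES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

The value of even a toy gate like this is that the tier is captured at intake, before development begins, rather than reconstructed after the fact.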

The patchwork problem and the GDPR template

The August 2026 deadline is the most visible date on the calendar, but it is far from the only one enterprises are tracking. The Gulf states, the United Kingdom, Brazil, Canada, India and at least a dozen US state legislatures are at varying stages of drafting their own AI rules. Guru Sethupathy, General Manager of AI Governance at Optro, believes the fragmentation will be less severe than headlines suggest, largely because regulators are watching each other. "I expect the EU AI act to be a bit of a standard bearer for many other regulations that dovetail off of it. And you're starting to see that already. Several European countries are building on the EU AI Act for their own territories. Italy, for example. And I expect the Middle East to follow the EU act quite closely," he said.

The precedent he points to is GDPR, which became the de facto global baseline for data protection because the cost of running parallel regimes was too high. Sethupathy expects a similar gravitational pull around the AI Act, with two practical strategies emerging for multinationals: either default to the strictest law and let everything else be subsumed, or use technology to map specific obligations on a jurisdiction-by-jurisdiction basis. The first approach trades efficiency for simplicity. The second trades simplicity for precision.
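The two strategies reduce to set operations over per-jurisdiction obligation lists. A minimal sketch, with invented jurisdiction keys and obligation names standing in for what would really come from legal mapping:

```python
# Hypothetical obligation sets per jurisdiction; real mappings come from
# legal review of each regime, not a hard-coded dictionary.
OBLIGATIONS = {
    "EU": {"risk_classification", "technical_documentation", "human_oversight"},
    "UK": {"risk_classification", "transparency_notice"},
    "Brazil": {"risk_classification", "impact_assessment"},
}

def strictest_baseline() -> set:
    """Strategy 1: apply the union of every jurisdiction's obligations everywhere."""
    return set().union(*OBLIGATIONS.values())

def per_jurisdiction(market: str) -> set:
    """Strategy 2: apply only the obligations that attach in a given market."""
    return OBLIGATIONS.get(market, set())
```

Strategy 1 is one pipeline to maintain but over-complies in lighter-touch markets; strategy 2 is leaner per market but requires keeping the mapping current as rules shift.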

Tim Pfaezer, SVP and General Manager for EMEA at Veeam, is seeing a third response in the field, particularly among regulated incumbents. The starting point, he argues, is no longer the regulation itself but the data underneath it. "You have to understand your data. Please clarify: what can I place regarding innovation, and at what point do I have a physical or psychological border to say I need to use the data more to be strictly compliant? So you have to map all your AI systems to risk. That's what you have to do. Otherwise, you're getting completely out of hand," he said.

Walid Issa, Senior Manager for Solutions Engineering across the Middle East and Africa at NetApp, makes a similar point from the infrastructure side. "The biggest challenge isn't regulation itself, it's visibility into data. When data is distributed across environments, compliance becomes inconsistent by default. The friction lies in understanding how data moves and how it's used," he said. Without a single source of truth for how data flows, every regulatory question becomes a fresh archaeological dig.

Innovation, friction, and the false trade-off

The argument that compliance kills innovation is older than the regulations themselves, and it remains the most common reason enterprises delay action. Sethupathy is unsentimental about it. "Imagine tomorrow you find out that there are no traffic lights, no stop signs, no speed limits, no traffic cops, no seat belts, and no restrictions on who can drive. Anybody from 5 years old to a drunk driver to an old person can drive. Would you feel more comfortable driving on the road or not?" he asked. The analogy lands because the alternative being defended is rarely articulated. Governance, in his telling, is what allows trust to scale in the first place.

For Pfaezer, the answer is architectural. "You can do some modular governance frameworks that are then adaptable by regions, depending on where you do business. So, like a click, copy and paste, that you could do. You could embed compliance even into development, so that you're actually starting a new project and not taking something that's done and then post build, you try to reach in," he said. The economics of retrofitting governance onto a finished system are punishing; the economics of designing it in at the requirements stage are roughly invisible.

Beswick describes the same principle, working from the regulatory text downward. He suggests organisations start with a legal team distillation that narrows a 400-page regulation to a handful of provisions that actually affect the business. From there, technology can do the synthesis work, identifying overlaps between regulators and automating the metadata capture that governance regimes demand. The synthesis layer, he notes, is doing more work than most organisations realise.

Issa identifies the bottleneck even more bluntly. "Governance breaks down when it relies on manual processes. To scale, it needs to be embedded into how systems operate. Automated policy enforcement and monitoring allow governance to keep pace with AI. This is especially important in environments with distributed hybrid data, where complexity is already high and manual controls quickly become a bottleneck," he said.
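The automated enforcement Issa describes often amounts to a gate that blocks deployment until required governance metadata exists. A hedged sketch, with hypothetical field names; what counts as "required" would be set by the organisation's own governance framework:

```python
# Illustrative required fields; a real policy would derive these from the
# applicable risk tier and regulatory framework.
REQUIRED_METADATA = {"owner", "intended_purpose", "risk_tier", "data_lineage"}

def enforce_policy(system_record: dict) -> list:
    """Return the missing fields that block deployment.

    An empty list means the gate passes; a manual review process would
    instead discover these gaps one audit at a time.
    """
    return sorted(REQUIRED_METADATA - system_record.keys())
```

Running a check like this in the deployment pipeline is what turns governance from a periodic audit into a continuous control.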

The board has woken up

The single biggest behavioural shift over the past 18 months has been the migration of AI risk from the IT stack to the boardroom. Pfaezer describes the change as a hard discontinuity. "It's shifting from being reactively compliant to a managing and proactive perspective. And because of the money involved and the penalties around, that is now a board-level priority. So the board is looking way more into their IT stack, and way more into IT operations than ever before," he said.

Issa is seeing the same shift. "Yes, the scale of potential penalties has elevated AI risk to a board-level priority. Leadership teams are now more engaged in understanding model governance, data lineage, and compliance exposure. The biggest shift has been treating AI risk like financial or cybersecurity risk, with structured oversight, clearer accountability, and tighter alignment between legal, security, and engineering teams," he said.

For Alessandro Liotta, EMEA Regulatory Affairs Lead at Fortinet, the calculus pre-dates the penalty regime. "Even before legal obligations and the risk of high penalties, our board's decisions were driven by market expectations that our AI products are safe, compliant and that we use innovative technologies responsibly. Setting high common compliance standards matched with high fines in case of a breach is essential to level the playing field, and it also helps to build a safer and more resilient society," he said.

Liotta noted that Fortinet has applied robust governance even in cybersecurity AI, a category formally excluded from the Act's high-risk classification, because customer trust in the sector cannot tolerate the alternative.

That trust is becoming its own market. Sethupathy argues that as the underlying models commoditise, with Claude, GPT and Gemini increasingly indistinguishable on raw capability, the real differentiation moves to governance. He no longer thinks of regulation as an impediment but as the mechanism through which trust gets built. The companies he sees keeping pace are not the ones with the biggest compliance budgets, but the ones that have already moved governance from periodic audits to a continuous discipline, with logs, transcriptions, and reasoning trails for every agentic decision a system makes.
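The reasoning trails Sethupathy describes can be illustrated as an append-only record per autonomous action. This is a minimal hypothetical sketch, not any vendor's actual logging API:

```python
import json
import time

def log_decision(trail: list, agent: str, action: str, reasoning: str) -> dict:
    """Append one auditable record per autonomous decision an agent takes."""
    record = {
        "timestamp": time.time(),
        "agent": agent,
        "action": action,
        "reasoning": reasoning,
    }
    trail.append(record)
    return record

def export_trail(trail: list) -> str:
    """Serialise the trail for auditors as JSON Lines, one record per line."""
    return "\n".join(json.dumps(r) for r in trail)
```

The design point is that the reasoning is captured at decision time; reconstructing why an agent acted weeks later, from scattered application logs, is exactly the archaeological dig the interviewees warn against.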

Compliance as a competitive moat

The strategic question, then, is whether early movers extract a durable advantage from all this work. Beswick is cautious but pointed. "I think the true answer to that question will be borne out once the high-risk requirements are put in place. I think then you'll see companies that got ahead not lagging, and you'll see companies that thought, 'I'll figure it out when it hits,' lagging. Many of the companies we work with are trying to get ahead of it," he says.

Pfaezer sees a sectoral split. "The more regulated it is, the more compliance is the key, the more is the compliance adherence also a trust currency, really, that you have," he says. For financial institutions and critical-infrastructure operators, compliance maturity is already a winning sales argument. For consumer-facing businesses with millions of customer interactions a day, agility still tends to win the trade-off, though that gap is narrowing.

Issa puts the upside in operational terms. "That shift is already visible. Organisations that have strong control over their data are able to move faster and with more confidence. Creating a unified view of data reduces friction, improves trust and enables more consistent outcomes. Over time, that combination turns compliance into something that supports growth rather than slowing it down," he says.

Liotta argues the discipline pays in resilience as much as in revenue. "At Fortinet, we have built on our solid commitment to following high industry standards to ensure our organisation is resilient and we develop safe products. Our AI governance follows strict principles of accountability, transparency, fairness and data protection. Most importantly, we follow the Secure AI by Design approach, which sets out to incorporate security controls from the outset when developing AI systems," he says. The cost of bolting governance on after the fact, he notes, is consistently higher than the cost of building with it.

The legacy problem nobody wants to talk about

The hardest conversations inside enterprises right now are not about new systems. They are about the AI quietly running in production from before anyone thought regulation would arrive. The European Commission has made it clear that legacy systems must be qualified to the same standard as new ones, and Beswick says the inventory problem is now one of the most common topics of conversation Dataiku has with customers. Many businesses simply do not know all the AI systems running across their organisation, a situation made worse by generative AI agents embedded in third-party software that no one centrally tracks.
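An inventory exercise of the kind Beswick describes can start as a simple registry that flags every system with no risk tier, including vendor-embedded agents that nobody centrally tracks. Field names here are illustrative, not drawn from any specific tool:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystem:
    name: str
    owner: Optional[str] = None       # unknown ownership is exactly the gap to surface
    vendor_embedded: bool = False     # e.g. agents shipped inside third-party software
    risk_tier: Optional[str] = None   # unqualified until assessed against the Act

def unqualified(inventory: list) -> list:
    """Legacy and embedded systems with no risk tier form the remediation backlog."""
    return [s.name for s in inventory if s.risk_tier is None]
```

The legal point driving this is that, per the Commission's position the article cites, a legacy system with no entry in such a registry carries the same obligations as a new one; it is simply invisible.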

Pfaezer is clear about what most companies will find when they look. "Legacy AI systems don't even have proper documentation, so it's not there. So how does this affect your data? You've got gaps in who's reading, owning it, who can explain, and who has a data lineage. You've got inconsistent governance, probably across business units," he says. The fix, he argues, is partnership: retrofitting controls across vendors, customers and regulators rather than each enterprise rebuilding from scratch.

Liotta agrees that the international dimension makes the problem harder still. "The lack of international regulatory standards also creates obstacles, with the EU having paved the way for a strict set of requirements, while other jurisdictions like the US are so far favouring a less rigid approach. This means that operating internationally and navigating different regulatory regimes is very hard. This is why we set our bar very high and follow an open and transparent approach to inspire trust in our organisation and products," he says.

Sethupathy is most worried about the speed of the technology relative to the speed of the rules. "States and governments and countries just do not move quickly. If you look at the EU AI Act, it doesn't consider agents, for instance, because agents didn't exist in 2023, 2024," he says. The Act will be amended; frameworks such as ISO 42001 and Singapore's AI Verify will fill gaps; and continuous monitoring will replace point-in-time audits. The companies that have already industrialised that shift are the ones for whom August 2026 is a milestone rather than a cliff edge.
