The AI Gold Rush Has a Hidden Kingmaker
In the AI gold rush, it’s tempting to split the world into two camps. The first is the miners: model builders such as OpenAI and Anthropic, digging for breakthroughs. The second is the shovel sellers: the chip and cloud giants (NVIDIA, AMD, Intel, Microsoft and the hyperscalers) selling the compute that turns those breakthroughs into products.
That framing still misses who, and what, actually decides how fast this boom can grow.
Before anyone can sell a shovel, someone has to make the tools needed to manufacture it. In the AI economy, that “toolmaker behind the toolmakers” is ASML, the Dutch company whose machines help produce the most advanced chips on earth. ASML isn’t in the spotlight the way model labs or GPU makers are, yet it quietly shapes what’s possible by influencing how quickly the world can produce cutting-edge chips in the first place.
But hardware alone isn’t the finish line. Companies can’t roll AI out across their business if they can’t trust it. Trust here isn’t sci-fi control; it’s basic operational confidence: Where did this output come from? What data fed it? Who approved it? What happens if it’s wrong?
This story, then, isn’t only about miners and shovel sellers. It’s about two kingmakers. One sits upstream: ASML, which shapes the pace of the compute boom by enabling the chips that power modern AI. The other sits downstream: enterprise governance, the rules and controls that determine whether AI can be used safely, repeatedly and at scale.
ASML: the cast maker in the shovel economy
ASML’s role isn’t subtle. It’s structural.
Reuters has described ASML as the sole supplier of extreme ultraviolet (EUV) lithography tools used to manufacture the most advanced chips, noting EUV systems cost roughly $180 million each. If the GPU boom is the visible face of the AI economy, ASML is part of the hidden skeleton.
NVIDIA can dominate training and inference, but it cannot build cutting-edge GPUs without leading-edge manufacturing, and leading-edge manufacturing now runs through lithography capability that ASML dominates.
ASML’s next-generation bet, High NA EUV, pushes the price tag into the realm of industrial mythology. Reuters has reported that High NA tools are the size of a double-decker bus and cost more than $350 million each.
ASML has also said it has shipped its newest High NA systems to customers, a sign that this isn’t a lab curiosity but a commercial race.
That’s why the “shovel seller” metaphor is both correct and incomplete. The shovel seller makes the money until the supply chain’s upstream dependency shows up, and then the centre of gravity shifts. The closer you get to the frontier of manufacturing, the less it behaves like a free market and the more it behaves like leverage.
ASML’s leverage is also shaped by geopolitics. Reuters has reported that ASML has been restricted from selling its most advanced EUV tools to China since 2019 and has never sold an EUV tool there. It has also been reported that ASML expects U.S. and Dutch export rules to reduce sales of certain mid-range systems to China by around 10% to 15% in 2024.
So the “cast maker” isn’t just an industrial supplier. It’s a strategic asset that governments worry about, regulate, and sometimes treat like a national security perimeter.
The second kingmaker: you can’t deploy what you can’t explain
The first kingmaker is about manufacturing capability. The second kingmaker is about using it.
The Dataiku / Harris Poll survey of 812 “Data Leaders” across the United States, United Kingdom, France, Germany, the UAE and APAC captures the tension in blunt numbers: enterprise adoption is moving quickly, but control is still catching up.
Some figures form the spine of this story:
86% report moderate or extensive use of AI agents in day-to-day operations, and 42% say agents are extensive, critical and embedded in dozens of daily operations.
Only 19% always require AI systems to “show their work” before approving them for production use.
95% do not believe they could trace an AI decision for regulators 100% of the time.
52% have delayed or blocked an AI agent’s deployment due to explainability concerns.
66% are less than fully confident that their AI agents could pass a basic audit.
59% report real-world issues caused by AI hallucinations or inaccuracies this year.
And despite all of this, 74% would accept no more than a 6% error threshold before reverting to human-managed processes.
That is not a market that has “solved” AI. That is a market that has installed AI and is now discovering the cost of turning it into a dependable utility.
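That last number is easy to state and hard to operationalise. As a rough illustration (the class name and window size below are hypothetical, not any vendor’s implementation), the mechanics can be as simple as a rolling error budget that routes work back to humans once the 6% line is crossed:

```python
# Hypothetical sketch: a rolling error-rate check that flips an AI-managed
# process back to human review once it crosses an agreed threshold.
# The 6% figure mirrors the survey's reported tolerance; everything else
# here is illustrative.
from collections import deque

class ErrorBudget:
    def __init__(self, threshold: float = 0.06, window: int = 500):
        self.threshold = threshold            # max tolerated error rate (6%)
        self.outcomes = deque(maxlen=window)  # rolling record of pass/fail

    def record(self, correct: bool) -> None:
        self.outcomes.append(correct)

    @property
    def error_rate(self) -> float:
        if not self.outcomes:
            return 0.0
        return 1.0 - sum(self.outcomes) / len(self.outcomes)

    def route(self) -> str:
        # Revert to a human-managed process once the budget is blown.
        return "human_review" if self.error_rate > self.threshold else "ai_agent"
```

The hard part in practice is not the arithmetic but agreeing on what counts as an error, which is exactly the governance gap the survey describes.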
Kurt Muehmel, Head of AI Strategy, Dataiku, describes the practical workaround enterprises are reaching for: if the core model is a black box, the system around it cannot be.
“It is true that you cannot fully explain why a particular LLM input leads to a specific output. In many use cases, the practical answer is to make the overall system transparent even if one component is a ‘black box.’ If you know what went in, what was asked, what came out, and how the output is used, you can decide whether that level of opacity is acceptable for that step, within a broader chain that is traceable.”⁹
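In code, that posture is unglamorous. A minimal sketch, assuming a generic call_model callable standing in for whatever opaque model is in use (the function and log schema below are illustrative assumptions, not Dataiku’s API), is just an append-only record wrapped around the black-box step:

```python
# Minimal sketch of the point above: even if the model call is a black box,
# the system around it can record what went in, what was asked, what came
# out, and how the output is used. `call_model` is a hypothetical stand-in.
import json, time, uuid

def traced_call(call_model, prompt: str, inputs: dict, used_for: str) -> dict:
    record = {
        "trace_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "inputs": inputs,        # what went in
        "prompt": prompt,        # what was asked
        "used_for": used_for,    # how the output is used downstream
    }
    record["output"] = call_model(prompt, inputs)  # the opaque step
    # Append-only log so an auditor can replay the decision later.
    with open("decision_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

An auditor replaying decision_log.jsonl can see what went in, what was asked, what came out and how it was used, even though the middle step stays opaque.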
Then he makes the deeper claim, the one that links back to the “cast maker” metaphor: enterprises are building an additional layer on top of models and compute because they need something closer to an operating system for AI decisions.
“Over time, enterprises abstracted compute through the cloud and simplified data management through data platforms. The next need is a layer that stitches together different types of intelligence (LLMs, predictive models, statistics and human expertise) with enterprise data, into a governed process that runs from problem definition through execution.”
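Read literally, that “layer” is an orchestration pattern. The sketch below is an assumption-laden toy, with every function a stand-in rather than a real product interface, but it shows the shape: LLM stages, predictive stages and human checkpoints as interchangeable steps in one auditable chain.

```python
# Illustrative only: a "governed process" as an explicit chain of steps,
# where LLMs, predictive models, and humans are interchangeable stages.
# Every name here is a hypothetical stand-in.
from typing import Callable

Step = tuple[str, Callable[[dict], dict]]

def llm_extract(payload: dict) -> dict:   # stand-in LLM stage
    payload["entities"] = ["ACME Corp"]
    return payload

def risk_model(payload: dict) -> dict:    # stand-in predictive stage
    payload["risk"] = 0.2
    return payload

def human_gate(payload: dict) -> dict:    # stand-in human checkpoint
    if payload["risk"] > 0.5:
        raise RuntimeError("escalate to human review")
    return payload

def run_process(steps: list[Step], payload: dict) -> dict:
    for name, step in steps:
        payload = step(payload)
        payload.setdefault("audit", []).append(name)  # lineage for auditors
    return payload

result = run_process(
    [("extract", llm_extract), ("score", risk_model), ("gate", human_gate)],
    {"document": "contract.pdf"},
)
```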
The gold rush isn’t only about who owns the mine or who sells the shovel. It’s also about who controls the rails the shovel has to run on inside a business.
Ownership isn’t one thing. It’s a stack.
If the consumer narrative of AI is “ask the model a question,” the enterprise narrative is “put the model inside a workflow and hope it behaves.”
Tim Pfaelzer, General Manager and Senior Vice President EMEA at Veeam, breaks the ownership question into layers: infrastructure, models, and then the enterprise that actually deploys AI into operational reality.
“Start with the hardware and cloud providers. They enable scale. Without them, you don’t run these models at the level people expect. But that doesn’t mean they own the intellectual property of what’s generated, and they shouldn’t be treated as the party that controls the content or carries the compliance obligation for the output,” he said.
“Then you have model developers. They own the algorithms, yes, but even that is tricky, because models are only as good as the data they’re trained on and the data they’re fed. So model developers can’t claim full ownership of what happens in practice when they don’t control the data conditions the model is operating under,” he added.
“And then you have the enterprise layer, the organisations that actually implement AI and put it into workflows. This is where governance frameworks, privacy enforcement, human oversight, and operational continuity sit. That’s also where the accountability really becomes tangible.”
This is where the ideas fuse cleanly:
ASML is the upstream kingmaker that influences who can build at the frontier of compute.
Enterprise governance is the downstream kingmaker that determines whether AI can be used safely enough to scale.
The takeaway is brutally simple: the AI economy is not a straight line from models to value. It’s a chain of dependencies, and money and power accrue to the points in that chain the rest of the stack can’t avoid.
The real “ownership” fight is about identity and context
Jessica Constantinidis, Innovation Officer, EMEA at ServiceNow, pushes the ownership question away from a single company and toward a model that looks more like segmented identity.
“I think it has to be shared responsibility. The reason I say that is because the way we’re heading, ‘ownership’ won’t be a single line item like ‘the chipmaker owns it’ or ‘the platform owns it’. It will be layered across identity, context and governance,” she said.
Then she describes a future that reads less like a product roadmap and more like a social contract: multiple “LLM profiles” acting as separate compartments of your life, each with different ownership rules and guardrails.
“Whatever you think you know about how you’ll interact with AI, reset it, because you won’t have one assistant. You’ll have multiple LLM profiles you interact with, almost like different entry points, each holding a different slice of you and your life,” she said.
“Experts believe you’ll have four LLM profiles on your device. One: your government LLM. Two: your work LLM. Three: your family LLM. Four: your leisure LLM. So when you ask ‘who owns the model’, my answer is: it depends which of those identities it serves and what rights and guardrails exist around it.”⁸
That frames the enterprise governance problem in human terms: AI is not just a tool sitting on a server. It’s a set of identities, permissions, contracts and contexts. The question isn’t “who owns the model,” it’s “who owns the conditions under which the model is allowed to act.”
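Reduced to code, “owning the conditions” looks like policy, not weights. The four profile names below come from Constantinidis’s split; the rules attached to them are invented for illustration.

```python
# A toy rendering of segmented "LLM profiles": the same request is allowed
# or refused depending on which identity it arrives under. Profile policies
# are hypothetical; the point is that permission, not the model, is the
# unit of ownership.
PROFILE_POLICIES = {
    "government": {"can_access": {"tax_records"}, "human_signoff": True},
    "work":       {"can_access": {"crm", "email"}, "human_signoff": True},
    "family":     {"can_access": {"calendar"}, "human_signoff": False},
    "leisure":    {"can_access": {"media"}, "human_signoff": False},
}

def may_act(profile: str, resource: str) -> bool:
    policy = PROFILE_POLICIES.get(profile)
    return bool(policy) and resource in policy["can_access"]

assert may_act("work", "crm")
assert not may_act("leisure", "tax_records")  # wrong identity, no access
```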
Why the hidden kingmakers matter now
ASML matters because in an era where chip demand is being rewritten by AI, its lithography dominance behaves like a gravitational field. It bends everything around it: supply chains, industrial policy, and geopolitical bargaining power.¹⁴
Enterprise governance matters because adoption is now outpacing control. The Dataiku numbers show leaders deploying agents while admitting they cannot fully trace decisions, cannot always demand that systems show their work, and routinely stall deployments as a result.
Pfaelzer’s version of this reality is practical, contractual and unromantic:
“At Veeam, trusted accountability is a non-negotiable element in the AI era. That means we start by treating data integrity and governance as foundational, not optional,” he said.⁷
Then he spells out what “ownership” looks like when it stops being a debate and starts being liability: “contracts matter,” “vendors” must meet strict standards, and “resilience and rollback” must exist because “if AI goes wrong, we need precision rollback for continuous compliance.”
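“Precision rollback” sounds abstract until you picture the data structure behind it. A hedged sketch, assuming a simple versioned registry (the shape is a guess, not Veeam’s actual mechanism): every release carries a compliance flag, and rolling back means walking history to the last compliant entry.

```python
# Hedged sketch of "precision rollback": keep every deployed model/config
# version with its compliance status, so a bad release can be reverted to
# the last known-compliant one. The registry shape is an assumption.
versions = [
    {"id": "v12", "compliant": True},
    {"id": "v13", "compliant": True},
    {"id": "v14", "compliant": False},  # the release that "went wrong"
]

def rollback_target(history: list[dict]) -> str:
    for release in reversed(history):
        if release["compliant"]:
            return release["id"]
    raise RuntimeError("no compliant version to roll back to")

print(rollback_target(versions))  # -> "v13"
```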
In other words, the enterprise layer is where the gold rush finally meets gravity.