When Every AI Model Starts to Look the Same, Trust Becomes the Only Edge That Matters

There is a version of the AI governance conversation that goes in circles. Someone argues that regulation will kill innovation. Someone else argues that without oversight, things will go badly wrong. Both sides reach for their familiar ammunition, and nothing much moves.

Guru Sethupathy has sat at enough tables — academic, corporate, consultancy, startup — to have grown impatient with that loop. He spent years at McKinsey advising Fortune 100 boards on data and AI strategy, then moved inside Capital One to lead teams that actually built and governed AI systems at scale. He founded FairNow, an AI governance startup, which eventually became part of Optro, where he now runs AI governance as a product. The two decades add up to a particular kind of fluency: he understands the technology deeply enough to know where the risks actually sit, and he has worked inside enough organisations to know why they struggle to address them.

When we spoke, the EU AI Act was less than a year from its broad applicability window. The conversation ranged from whether AI is as useful as its proponents claim to what trust will actually mean once the hype settles.

The Source Code (TSC): Some people still look at AI and say it is overhyped — that it is not particularly useful today. How do you respond to that?

Guru Sethupathy (GS): That perspective usually comes from evaluating the technology based on its current state, without accounting for its trajectory.

What I would say is this: the capabilities of AI today are the worst they will ever be. From here, the only direction is improvement. If you look at where AI systems were in 2022 and compare that to where they are now, the progress has been extraordinary — far faster than what we typically see with human learning or skill development. The rate of change is not slowing.

So the mistake is anchoring your expectations to the present moment. This is not a static technology. Project forward even a few years, and the capabilities will be materially different. Judging the long-term potential of AI based on what it can or cannot do today is a bit like looking at the early internet in 1996 and concluding it had limited practical value.

TSC: There is a persistent argument that regulation and governance slow innovation down. What is your response to that?

GS: I think that argument is outdated, particularly in the context of AI. Let me ground it in an analogy.

Imagine a world with no traffic rules. No speed limits, no stop signs, no requirement to hold a licence, no age restrictions on who can drive. Would people feel safe in that environment? Most likely not. That's how I think about governance in the context of AI.

AI is not an incremental technology. It is fundamentally different in its behaviour. It is probabilistic rather than deterministic — it does not always produce the same output under the same conditions. It can respond differently depending on how it is prompted or how it interprets inputs. And now, with agents, you have systems that can take actions on behalf of users, make decisions, and interact with other systems. That introduces a level of autonomy we have not really dealt with in previous generations of software. This is a step change.
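To make that probabilistic-versus-deterministic distinction concrete, here is a minimal, hypothetical sketch in Python. The token names and probabilities are invented for illustration: a classical function returns the same output for the same input every time, while a toy sampler, standing in for a model choosing its next token, can return different outputs on identical inputs.

```python
import random

def deterministic_fee(amount: float) -> float:
    # Classical software: identical input always yields identical output.
    return round(amount * 0.07, 2)

def sample_decision(distribution: dict[str, float], temperature: float = 1.0) -> str:
    # Toy stand-in for a model's next-token choice: rescale the
    # probabilities by temperature, then sample from them.
    weights = {tok: p ** (1.0 / temperature) for tok, p in distribution.items()}
    total = sum(weights.values())
    tokens = list(weights)
    probs = [weights[t] / total for t in tokens]
    return random.choices(tokens, weights=probs, k=1)[0]

dist = {"approve": 0.55, "escalate": 0.30, "deny": 0.15}
print([deterministic_fee(50_000) for _ in range(3)])  # three identical results
print([sample_decision(dist) for _ in range(3)])      # results may differ run to run
```

Governance has to account for exactly this property: the same system, given the same input twice, may not behave the same way twice.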

Because of that, governance is not in opposition to innovation. It is what makes innovation usable at scale. There is already significant confusion and distrust around AI. If you look at the leading models today — GPT, Claude, Gemini — they are increasingly similar in capability. Performance is becoming commoditised. The real differentiator becomes trust. And trust does not emerge on its own. It has to be built through governance, transparency, and accountability.

TSC: The EU AI Act is expected to become broadly applicable around 2026. How should organisations — especially those operating across multiple regions — think about compliance?

GS: There are a couple of things to unpack. The first is that the timeline itself is not entirely fixed. August 2026 is what is currently being discussed, but there is a real possibility it gets pushed to 2027. The EU is already signalling that. So even in terms of timing, there is some uncertainty.

The second is that, even with that uncertainty, I do not think the compliance landscape will be as fragmented as many organisations fear. What I expect is that the EU AI Act will function the way the GDPR did for data privacy — as a kind of standard-bearer. It will set a baseline, and other regions will build on top of it. You are already seeing Italy begin to shape domestic regulation around the EU framework. I would expect the Middle East and other regions to follow a similar pattern.

From a practical standpoint, companies tend to approach this in one of two ways. The first is to adopt the strictest regulation and apply it globally, which simplifies operations because you are essentially meeting the highest bar everywhere. The second is to use technology to manage the complexity. At Optro, we have built systems that ingest regulations across jurisdictions, map them to specific AI systems and use cases, and surface exactly what a company needs to do in each context. You are not manually interpreting legal text across thirty countries. The technology does that mapping for you.
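Sethupathy does not describe Optro's internals, so the following is only a speculative sketch, in Python, of what a regulation-to-use-case mapping might look like. The jurisdictions, obligation texts, and the inclusion of New York City's Local Law 144 are illustrative assumptions, not the company's actual data model.

```python
# Illustrative mapping only; the obligations below are paraphrased, not legal text.
OBLIGATIONS: dict[tuple[str, str], list[str]] = {
    ("EU", "hiring"): [
        "Treat as high-risk under the EU AI Act",
        "Maintain technical documentation and decision logs",
        "Provide for human oversight of automated decisions",
    ],
    ("US-NYC", "hiring"): [
        "Commission an annual independent bias audit (Local Law 144)",
        "Notify candidates that an automated tool is being used",
    ],
}

def requirements(jurisdiction: str, use_case: str) -> list[str]:
    # Surface what one AI system must do in one place, instead of a
    # human re-reading statutes across every market.
    return OBLIGATIONS.get(
        (jurisdiction, use_case),
        ["No mapped obligations found; flag for manual legal review"],
    )

print(requirements("EU", "hiring"))
```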

TSC: With penalties under the EU AI Act reaching €35 million or 7% of global revenue, how should companies approach risk?

GS: The penalties are serious, and they are designed to ensure that compliance cannot be treated as a back-office problem. At that scale of exposure, it is very difficult for an organisation to ignore.
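For a sense of that exposure, the ceiling for the most serious violations is commonly read as the higher of the two figures in the question. A back-of-the-envelope sketch, simplified because the Act actually tiers fines by violation type:

```python
def eu_ai_act_max_fine(global_revenue_eur: float) -> float:
    # Simplified assumption: the ceiling for the most serious violations
    # is EUR 35M or 7% of worldwide annual turnover, whichever is higher.
    # Real fines are tiered by violation type.
    return max(35_000_000.0, 0.07 * global_revenue_eur)

print(f"EUR {eu_ai_act_max_fine(2_000_000_000):,.0f}")  # EUR 140,000,000 for a EUR 2B company
```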

But it is important not to focus exclusively on regulatory risk. Organisations are dealing with two distinct categories of risk simultaneously. The first is regulatory — fines, legal exposure, compliance obligations. The second is business risk, which in many cases is more immediate and more directly felt.

There are three main types of business risk. The first is performance risk: the system does not do what it is supposed to do. It hallucinates, introduces bias, or fails to deliver the anticipated efficiency gains. The second is security risk — not just traditional data privacy concerns, but newer threats like model poisoning and adversarial attacks. There was an incident involving McKinsey in which an external agent penetrated an AI system and caused operational damage. That is not a regulatory issue. That is a business risk that directly affects trust and operations. The third is explainability risk: if something goes wrong and you cannot trace why the system behaved as it did, you have lost control.

So even if there were no regulations at all, companies would still need to govern their AI systems. The risks are inherent to the technology.

TSC: Do governments and regulators actually have the ability to keep pace with AI development?

GS: This is where I have genuine concerns, and I think they are grounded in structural realities rather than pessimism.

There are two main challenges. The first is speed. Governments are not designed to move quickly. Policy development involves consultation, iteration, and approval processes — all of which take time. The second is knowledge. AI is a highly complex domain, and the people who understand it most deeply are overwhelmingly working in the private sector, not in government.

Take the EU AI Act as an example. It was actively being discussed around 2023 and 2024. There were delays, revisions, and now we are looking at implementation around 2026, possibly later. But in that same window, the technology evolved considerably. The original framework did not meaningfully account for AI agents, simply because they were not as prominent at the time. Now, agents are becoming central to the deployment of AI, and regulators are trying to catch up through amendments and by leaning on external bodies such as ISO and NIST.

That highlights the structural issue. Regulation will always lag behind technological development. It is not just about how quickly policies can be written — it is about whether the people writing them have the depth of understanding to anticipate risks that do not yet exist at the time of drafting. Closing that gap will be one of the defining challenges in this space.

TSC: Are public-private partnerships essential to getting AI governance right?

GS: Yes, absolutely — and not in a superficial way. Regulation in this space cannot be developed effectively by governments in isolation. The technical knowledge and practical experience sit largely in the private sector. But that collaboration needs to be responsible. There are voices in the industry that argue against regulation entirely, and I do not think that is a sustainable position. We need private-sector leaders who engage constructively rather than reflexively oppose oversight.

Independent bodies play a critical bridging role. Organisations like ISO and initiatives like AI Verify in Asia are building frameworks that translate technical realities into policy structures. That kind of work is genuinely valuable.

But beyond institutions, there is a broader concern. I do not think the average person fully understands how much AI will change their life — not in an abstract sense, but concretely, in terms of jobs, education, relationships, and daily decisions. What we need is what I would describe as civic capability: a baseline understanding that allows people to engage with these systems thoughtfully and critically. Otherwise, you end up with a small number of companies effectively shaping the future of a technology that affects everyone. That is not a healthy outcome for anyone.

TSC: As AI systems scale, how should organisations approach transparency and internal capability?

GS: Transparency and explainability will be fundamental — both externally, for customers and regulators, and internally, for operations.

If an AI system makes decisions that affect individuals, those individuals will expect an explanation. Whether it is a hiring decision, a lending decision, or a healthcare outcome, people will want to understand why a particular result occurred. That is not an unreasonable expectation. It is a basic accountability requirement.

Internally, organisations need the ability to audit their systems. If something goes wrong, you need to trace what happened and understand how the system arrived at a decision. Without that, you have lost meaningful control. So you will see a growing emphasis on auditability — decision logs, tracking mechanisms, the ability to reconstruct how an output was generated.
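What such auditability might look like in practice: a hypothetical sketch of a decision-log record in Python. The field names are assumptions rather than a standard schema; the point is capturing enough to reconstruct, later, how an output was generated.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(model_id: str, prompt: str, output: str, metadata: dict) -> dict:
    # One append-style audit record per model decision.
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,  # which model and version made the call
        "input_hash": hashlib.sha256(prompt.encode()).hexdigest(),  # traceable without storing raw input
        "output": output,
        "metadata": metadata,  # e.g. temperature, prompt-template version, reviewer
    }
    print(json.dumps(record))  # in practice: write to an append-only store
    return record

log_decision(
    "credit-scorer-v3",
    "applicant features ...",
    "escalate to human review",
    {"temperature": 0.2, "policy_version": "2025-06"},
)
```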

The complication is that this has to be balanced with protecting intellectual property. Companies are not going to expose proprietary model logic, and rightly so. The boundary between what needs to be disclosed and what can remain private is not clearly defined yet, and it will likely be worked out through legal processes over time.

The internal capability piece is equally important and often underestimated. As AI becomes more embedded in operations, employees will increasingly be responsible for managing and governing these systems. That requires a real shift in mindset and a sustained investment in upskilling. People need to know how to evaluate outputs, understand risks, and intervene when necessary. This is as much an organisational challenge as a technical one.

TSC: Finally, what does trust actually mean in the AI economy today?

GS: We're still in the early stages of defining it.

Almost every company in this space talks about trust. It has become a central theme in how organisations position themselves. But the mechanisms to actually verify those claims are still evolving. Saying you are trustworthy is not the same as demonstrating it.

There are encouraging signs. You are starting to see formal frameworks — ISO 42001 is an early example — that provide companies with a structured way to demonstrate their governance practices. More organisations are publishing documentation on how their systems work, what they are intended to do, and their limitations. There is more emphasis on testing and validation.

But even with all of that, it is still early. The standards are not fully mature, consistency across organisations is limited, and expectations continue to shift as the technology itself evolves.

Trust is clearly becoming one of the most consequential factors in the AI ecosystem. But it has not been solved. It is actively being built and will continue to develop alongside the technology. The organisations that take it seriously — not as a positioning exercise but as a genuine operational commitment — will be better placed as this space matures. That gap between the ones who mean it and the ones who say it is still wide enough to matter.
