“There is no enterprise AI without data security”: the Rehan Jalil thesis behind Veeam’s Securiti AI deal
Rehan Jalil doesn’t talk like a man who has just notched another win. He talks like a man trying to stop a quiet collapse.
Not collapse as in a single breach or a headline incident. Collapse as in organisations losing the ability to say what their data is, where it lives, who is touching it and what an autonomous system just did to it. Jalil is the sort of founder who builds for the moment you realise your tools are not just inadequate, they’re incompatible.
His latest milestone is his biggest: Veeam’s acquisition of Securiti AI, the company Jalil founded to map, govern and control enterprise data across systems that were never designed to agree with one another. It is the third time a company he founded has been acquired, after two earlier ventures built for two earlier platform shifts, and it comes with the kind of mandate that only exists when a market decides it can’t keep improvising.
Jalil’s view is that the agent era does not simply add another product category. It changes the operational physics. When AI agents scale, when workflows run without a human watching every step, the enterprise problem stops being only about blocking the bad thing. It becomes about reversing the bad thing, fast.
“In the world that’s emerging now, especially with AI agents, both have to exist together,” he says. “Agents are like self-driving cars: they’ll make mistakes. Some mistakes will be accidental, some will be intentional, and some will be things you don’t even know happened because the system acted autonomously.”
“Large enterprises don’t have ‘a few’ agents,” he says. “They will have thousands, potentially hundreds of thousands of agents and workflows, across teams, across functions. You need to know what those agents are doing, whether they’re making mistakes, whether they’re leaking information, whether they’re accessing what they shouldn’t.”
Then the line that explains why he thinks security and resilience are being forced into the same room.
“Even with the best governance, something will go wrong,” he says. “When it does, you need to undo it, restore, roll back, recover. That has to come together in one room, because it’s the same underlying information asset. Protect it, govern it and if it changes in the wrong way, bring it back.”
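Neither Veeam nor Securiti AI has published the mechanics of that loop, but the pattern Jalil is describing is concrete enough to sketch: snapshot state before an agent acts, validate the outcome against policy, and restore the snapshot if the action went out of bounds. The Python below is a minimal illustration under those assumptions; every name in it (GovernedStore, run_agent_action, the toy policy) is invented for this article, not either company’s API.

```python
# A minimal, hypothetical sketch of the "protect, govern, bring it back" loop.
# Not Veeam's or Securiti's implementation; every name here is invented.
import copy
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class GovernedStore:
    """A data store that snapshots itself so agent actions stay reversible."""
    data: dict[str, Any] = field(default_factory=dict)
    _snapshots: list[dict[str, Any]] = field(default_factory=list)
    audit_log: list[str] = field(default_factory=list)

    def run_agent_action(
        self,
        action: Callable[[dict[str, Any]], None],
        is_within_policy: Callable[[dict[str, Any]], bool],
        label: str,
    ) -> bool:
        """Protect (snapshot), let the agent act, govern (validate), or roll back."""
        self._snapshots.append(copy.deepcopy(self.data))  # protect: snapshot first
        action(self.data)                                 # the autonomous step
        if is_within_policy(self.data):                   # govern: check the outcome
            self.audit_log.append(f"{label}: committed")
            return True
        self.data = self._snapshots.pop()                 # bring it back: roll back
        self.audit_log.append(f"{label}: rolled back (policy violation)")
        return False

# Toy usage: an agent "scrubs PII" but breaks a validity rule, so the store
# restores itself and the audit log records what happened.
store = GovernedStore(data={"customer_email": "a@example.com"})
ok = store.run_agent_action(
    action=lambda d: d.update(customer_email="REDACTED"),
    is_within_policy=lambda d: "@" in d["customer_email"],
    label="agent-42 scrub",
)
print(ok, store.data, store.audit_log)
# False {'customer_email': 'a@example.com'} ['agent-42 scrub: rolled back (policy violation)']
```

The design point is the one Jalil keeps returning to: the guard and the undo live in the same layer, because they operate on the same underlying asset.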
The pattern he refuses to call a pattern
Jalil has a career that reads like a sequence of infrastructure bets, the kind that look obvious only in hindsight. WiChorus, founded in the telecom era, helped carriers handle the shift from voice-first networks to data-first reality. Elastica arrived as enterprises rushed into SaaS, discovering that cloud adoption created a sprawling new attack surface and almost no shared visibility.
If you want the simplified version, it is this: new platform shift, new sprawl, new chaos, then Jalil building the control layer.
He dislikes the popular version of the story, the one that makes founders sound like prophets. “I wouldn’t over-mystify it as having some special recipe for waves,” he says. “A lot of it is simply depth. When you’ve spent years inside a domain, you see around corners a little earlier, not because you’re smarter, but because you’re closer to the nuances. You see the shades of grey that look like noise to outsiders. That can translate into a timing advantage.”
What he is really describing is obsession, not in the romantic startup way, but in the engineering way. He gets fixated on the point where everything breaks, then builds the layer that stops the breaking from spreading.
Why Securiti AI was built before the world said “agent”
Securiti AI begins with a straightforward premise: data is the asset, and everything else is a wrapper. “If you rewind to 2019, there was a clear realisation for us: we were fully in the information age, information is king,” Jalil says. “What we saw back then was that controls were fragmented, point solutions everywhere, no unified intelligence layer that could give you a full view of your data and then enforce the controls you actually need.”
He is also unequivocal about the “AI” in the company name. He does not present it as a marketing update.
“Even though ChatGPT didn’t exist publicly at that time, we had a strong conviction that AI would inevitably use enterprise data and that would require an even tighter set of controls,” he says. “That’s why we named the company ‘Securiti’ with ‘AI’ very much in mind from early on.”
The shift, as he sees it, is that enterprise data is no longer only stored and accessed. It is continuously acted on, summarised, transformed, routed, and inferred from. At speed. With imperfect judgment. With limited audit.
Unification is not a slogan; it’s a survival requirement
Jalil’s favourite word is “unification,” and he uses it as a constraint, not a branding choice. “Unification means a customer shouldn’t have to mentally stitch together multiple systems to answer basic questions about their data and risk,” he says.
“Our aspiration is that the intelligence layer is unified under one knowledge graph, one place where you can see the entire continuum of information: where personal and sensitive data exists, what access and governance controls apply, what interactions are happening with AI systems and how resilience and recovery fit in if something goes wrong.”
He is dismissive of governance-as-theatre, the kind of approach that makes risk look managed without doing the hard work of understanding the data underneath. “It’s not just ‘put up a dashboard’ and block a few things,” he says. “If you feed garbage into AI, you get garbage out, and in enterprise settings, ‘garbage’ can mean outdated data, incorrect data, sensitive data, or data that a particular agent or user shouldn’t use. So you need deep intelligence first.”
“Our approach is intelligence around data as the foundation, bringing data into the knowledge graph so you understand what it is, what risk it carries and what policy should apply,” he says. Then he goes a step further, toward what he thinks will become the new security perimeter: the interaction itself.
“You need to understand the conversations and exchanges happening with AI systems, ensure they are within the intended bounds of the agent and ensure usage remains safe and controlled,” he says. “The goal is not just to stop bad outcomes, but to enable safe use, cleanly and responsibly.”
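Securiti has not published how its knowledge graph enforces those bounds, but the logic Jalil outlines, classify the data first, then gate every AI interaction against what a given agent is cleared to touch, can be sketched in a few lines. Everything here (the Sensitivity labels, FIELD_LABELS, fields_allowed) is a hypothetical stand-in, not the product’s schema.

```python
# Hypothetical sketch: classification-first guardrails on AI interactions.
# The "knowledge graph" is a toy dict; none of this is Securiti's schema.
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    PERSONAL = 3    # e.g. names, emails
    RESTRICTED = 4  # e.g. payment or health data

# Stand-in for the knowledge graph: field name -> classification.
FIELD_LABELS = {
    "product_docs": Sensitivity.PUBLIC,
    "sales_notes": Sensitivity.INTERNAL,
    "customer_email": Sensitivity.PERSONAL,
    "card_number": Sensitivity.RESTRICTED,
}

def fields_allowed(requested: list[str], clearance: Sensitivity) -> list[str]:
    """Gate an agent's request: pass cleared fields, block and log the rest."""
    allowed, blocked = [], []
    for name in requested:
        # Unclassified data gets the worst-case label: "intelligence first".
        label = FIELD_LABELS.get(name, Sensitivity.RESTRICTED)
        (allowed if label.value <= clearance.value else blocked).append(name)
    if blocked:
        print(f"blocked for this agent: {blocked}")  # would feed an audit trail
    return allowed

# A support agent cleared up to PERSONAL sees the email, never the card number.
print(fields_allowed(["product_docs", "customer_email", "card_number"],
                     clearance=Sensitivity.PERSONAL))
```

The point of the sketch is the ordering Jalil insists on: the classification exists before the interaction happens, so the gate is a lookup, not a guess.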
Privacy as custodianship, not paperwork
Jalil’s language around privacy is unusually direct. He doesn’t treat it as an add-on or a compliance tax. He treats it as a moral and operational obligation that forces better systems.
“The company doesn’t own personal information in the moral sense; companies are custodians,” he says. “If a person requests deletion, the organisation has to know where the data is and delete it. That’s the law in many jurisdictions, and expectations are tightening globally.”
It is also, in his view, part of why legacy stacks fail in the agent era. If you cannot reliably locate and classify data, you cannot reliably control what an agent can access, or prove what it did, or undo what it changed.
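That chain of dependencies is easy to make concrete. A deletion request, the example Jalil uses, only works if discovery has already mapped where a person’s data lives; the sketch below assumes such a map exists and shows the fan-out, delete, and evidence steps. All names and structures are illustrative, not any vendor’s implementation.

```python
# Hypothetical sketch of honouring a deletion request: locate, delete, prove.
# DATA_MAP stands in for the discovery/classification layer; names are invented.
from datetime import datetime, timezone

# subject id -> {system: [record ids]}. Building this map is the hard part.
DATA_MAP = {
    "user-123": {"crm": ["rec-1"], "warehouse": ["row-77", "row-78"]},
}

STORES = {
    "crm": {"rec-1": {"email": "a@example.com"}},
    "warehouse": {"row-77": {"purchase": "..."}, "row-78": {"purchase": "..."}},
}

def handle_deletion_request(subject_id: str) -> list[str]:
    """Delete every known record for the subject and return an evidence trail."""
    evidence = []
    for system, record_ids in DATA_MAP.get(subject_id, {}).items():
        for rid in record_ids:
            STORES[system].pop(rid, None)  # delete at the source
            stamp = datetime.now(timezone.utc).isoformat()
            evidence.append(f"{stamp} deleted {system}/{rid}")
    DATA_MAP.pop(subject_id, None)  # forget the map entry itself
    return evidence

for line in handle_deletion_request("user-123"):
    print(line)
```

Without the map, the rest of the function has nothing to iterate over, which is Jalil’s point: custodianship fails at discovery, not at deletion.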
Jalil has little patience for performative AI, for companies that say the word loudly and ship vague promises quietly. “You don’t get rewarded simply for using fashionable words,” he says. “If you say ‘AI’ a lot but don’t have a relevant product, don’t have real adoption and can’t demonstrate outcomes, you don’t get the premium you’re imagining. The reward comes when you’re solving a real problem and you can show you’re winning.” Then he states the constraint that keeps showing up in his worldview, like an equation that refuses to balance any other way.
“There is no enterprise AI without data security.”
For someone who has now reached a third major acquisition, Jalil talks less about arrival and more about obligation.

“The milestone is delivering the unified vision from the customer’s point of view,” he says. “Customers should be able to come to one place, understand what data they have, apply guardrails to prevent bad things from happening, and in the same place be able to undo the damage if something goes wrong. That’s what the agent world will require.”

He returns to a phrase his father once told him, “May you always struggle”, a blessing that sounds sharp until he explains it.

“I think what he meant, and the way I interpret it, is: may you keep striving,” he says. “The idea isn’t to romanticise hardship. It’s that the drive to make things better, to bend the future toward something improved, shouldn’t stop just because you hit milestones.”
That is the tension at the centre of his profile: a founder who keeps winning the moment, but builds like he is still trying to avert a disaster nobody wants to name. It is also the tidy summary of his career: he keeps showing up at the moment organisations realise they can’t scale on improvisation. In telecom, in cloud security, and now in the age of agents, his bet is the same: that the winners will not be the companies that make AI feel magical, but the ones that make AI feel controllable, auditable and, when it inevitably goes wrong, reversible. Trust, in other words, becomes infrastructure. And infrastructure, eventually, has to be unified.