The AI Labs Built the Threat. Now They Are Selling the Defence

On 11 May, as OpenAI launched Daybreak — its AI-powered vulnerability assessment initiative — Google's Threat Intelligence Group published a report confirming that it had thwarted a hacker attempt to use artificial intelligence to plan a mass-vulnerability exploitation operation. The group said it had high confidence that a criminal threat actor had used an AI model to find and exploit a zero-day vulnerability, creating a method to bypass two-factor authentication, before the attempt was disrupted. Groups linked to China and North Korea had demonstrated significant interest in using AI for vulnerability discovery, the report added.

The two announcements, arriving within hours of each other, captured the essential tension that has unsettled the security industry since April: the same AI capabilities being positioned as the solution to the problem are also accelerating the problem itself.

The sequence began on 7 April, when Anthropic unveiled Claude Mythos Preview and simultaneously said it would not release the model to the public. Over a period of weeks, the company said, Mythos had identified thousands of zero-day vulnerabilities across every major operating system and every major web browser, flaws previously unknown to the software's own developers, found at a scale and speed no human security team could match. Mozilla found 271 previously unknown vulnerabilities in Firefox alone using the model. Anthropic privately warned senior US government officials that Mythos makes large-scale AI-driven cyberattacks significantly more likely this year.

The response was Project Glasswing, a gated coalition giving Mythos Preview access to named partners including AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks, with access extended to over 40 additional organisations responsible for critical software infrastructure. Anthropic committed $100m in model usage credits to the initiative and priced Mythos Preview access at $25 per million input tokens and $125 per million output tokens for participants.

The industry's first reaction was to sell

The initial market response to the Mythos announcement was not relief. On 10 April, cybersecurity stocks fell between 7% and 14% over the course of the day as investors questioned whether the software-based defence model was structurally broken. The fear being priced in was that AI-powered attackers had acquired a capability that traditional security platforms could not keep pace with, regardless of how well-funded those platforms were.

Nikesh Arora, chairman and chief executive of Palo Alto Networks, put the new reality plainly in a blog post at the time. "Imagine a horde of agents methodically cataloguing every weakness in your technology infrastructure, constantly," he wrote.

The sector has since partially recovered as the coalition's logic has become clearer. A significant percentage of surveyed CIOs told analysts they expected a positive impact on cyber budgets within the next year as a direct result of the Mythos announcement. Whether that optimism holds depends on a question the market has not yet resolved: whether AI-assisted security compresses demand for traditional security products or expands the overall market by surfacing vulnerabilities that previously went undetected.

OpenAI moves into the market Anthropic defined

Daybreak is OpenAI's answer to a contest it did not start. Where Glasswing requires coalition membership, Daybreak opens the door wider: any organisation can request an assessment. The platform runs through Codex Security, OpenAI's coding agent, which builds an editable threat model from a company's software repository, automates monitoring for higher-risk vulnerabilities, and validates exploitability in an isolated environment. Three model tiers are available: GPT-5.5 for standard use, GPT-5.5 with Trusted Access for Cyber for verified defensive work, and GPT-5.5-Cyber for specialised authorised workflows.

OpenAI grounds the launch in a prior track record. The April release of GPT-5.4-Cyber, the first model the company classified as high-capability for cybersecurity tasks under its Preparedness Framework, contributed to fixing more than 3,000 vulnerabilities before Daybreak launched. "AI is already good and about to get super good at cybersecurity," Sam Altman wrote on X on 11 May. "We'd like to start working with as many companies as possible now to help them continuously secure themselves."

Daybreak's partner list is notably broad: Cloudflare, Cisco, CrowdStrike, Palo Alto Networks, Oracle, Zscaler, Akamai, Fortinet, SentinelOne, Okta, Snyk, and others. The inclusion of CrowdStrike and Palo Alto — both already named Glasswing coalition members — is the detail that most clearly illustrates the commercial reality the incumbent security industry is navigating. The two largest cybersecurity companies by revenue have now signed to both initiatives simultaneously.

Partners, targets, or both

The structural complexity that both Glasswing and Daybreak create for the incumbent security industry is not easily resolved. CrowdStrike and Palo Alto's inclusion in both coalitions reflects a deliberate hedge: neither company can afford to sit out a partnership that gives a competitor early operational access to frontier AI security capabilities. Elia Zaitsev, CrowdStrike's chief technology officer, acknowledged the stakes directly when Glasswing launched. "What once took months now happens in minutes with AI," he said. "Adversaries will inevitably look to exploit the same capabilities."

Arora was more direct about what the Glasswing relationship means for Palo Alto's own positioning. "By prioritising defensive access to these powerful capabilities, Anthropic is helping us ensure that while intelligence is being weaponised, the defenders are the ones with the superior stack," he said. "AI becomes the defender."

Those are coalition partner statements, made at the announcement. The more considered question, which neither executive has answered in public, is what the relationship looks like in two years if Anthropic and OpenAI develop their own security operations products and begin competing directly for the enterprise security budgets that CrowdStrike and Palo Alto currently dominate. CrowdStrike chief executive George Kurtz and Arora have both described AI as the largest driver of security demand since enterprises moved to the public cloud. The companies driving that demand are now also their most consequential new suppliers.

Discovery is not the same as remediation

One practical consequence of both initiatives that neither company's communications have addressed directly is the gap between finding vulnerabilities and fixing them. AI-assisted discovery at Mythos-class scale produces findings at a volume that most enterprise security teams are not operationally equipped to act on. Lucas Nelson of Lytical Ventures, speaking to researchers at the Council on Foreign Relations, described that asymmetry as "the defining cybersecurity challenge of the next decade" — discovery is accelerating exponentially while remediation still moves at human speed.

The downstream pressure on organisations outside both coalitions is also real. When a model finds a critical zero-day in the Linux kernel or a widely used open-source library, CVEs are eventually published and scanner signatures updated. Every organisation running that software then has a new critical finding to address. The volume of those findings will increase substantially as AI-assisted discovery becomes standard practice, regardless of which coalition produced them.

Neither company has answered the credibility problem

The dominant narratives around both launches were constructed largely by the companies making the announcements. Security researcher Bruce Schneier described the coverage of Mythos as reporters reproducing Anthropic's talking points without critical engagement. The core claim — that Mythos represents a watershed moment in AI-assisted vulnerability discovery — has been evaluated seriously by almost no one outside the Glasswing coalition, because almost no one outside the coalition has access to the model.

Anthropic is seeking a valuation of $900bn. The Glasswing coalition is simultaneously a safety mechanism and a commercial moat: the organisations with early access to Mythos are building operational relationships with Anthropic and feeding the data flywheel — vulnerability scanning data from over 40 top-tier clients — that will shape the model's next iteration. The model's inaccessibility is not just a safety decision. It also prevents independent evaluation of whether the technology does what Anthropic says it does.

OpenAI's position is not neutral. Daybreak is a market entry into enterprise security, and the safety language surrounding it serves the same dual purpose. The difference is one of degree, not of kind.

What Google's report today makes clear is that the threat is not hypothetical. AI is already being used to plan attacks at scale. That reality sits beneath both companies' announcements and gives the race its urgency, independent of the valuation pressures and competitive dynamics driving it.

What the race means for everyone else

For security teams outside both coalitions, the question is not which AI lab has the more impressive benchmark. It is what handing codebase access to an AI provider means operationally, which provider's workflow integrations connect to their existing security stack, and whether their internal processes can absorb a step change in vulnerability discovery volume.

Regulatory pressure is beginning to build. The April sell-off prompted renewed calls for secure-by-design mandates that shift liability for AI-generated vulnerabilities onto AI developers rather than the organisations using the software. No regulator has yet acted on that pressure, but the direction is visible.

What Daybreak's launch makes clear is that the race is now genuinely contested. Anthropic moved first, structured the coalition, and set the terms. OpenAI has responded with broader access and a different commercial theory. The incumbent security vendors are partnering with both while managing the longer-term risk that both eventually come for their market. The companies writing the largest enterprise security cheques are positioning now. Most of the industry is still working out what that positioning means.

Sindhu V Kashyap