Weaponised AI is turning cybercrime into a subscription business

Dark web criminals are doing with AI what every fast-moving industry does with new infrastructure: standardising it, pricing it, packaging it, and selling it to anyone who can pay.

That is the picture in a new whitepaper from Group-IB, Weaponized AI: Inside the criminal ecosystem fuelling the fifth wave of cybercrime, published on January 20, 2026. The company says underground discussions that mention AI-powered cybercrime have surged 371% since 2019, alongside a roughly thirteen-fold jump in replies (up 1,199%). The implication is not just more chatter. It is faster learning, better “how-to” knowledge transfer, and a larger pool of people confident enough to try.

Group-IB frames this as the “fifth wave” of cybercrime: a shift from attacks that rely heavily on skilled humans to attacks that rely on commoditised systems. In earlier eras, cybercrime moved from manual phishing in the late 1990s to industrial ransomware, then to supply-chain and ecosystem attacks in the early 2020s. Now, the company argues, the same logic is being applied to persuasion, impersonation, and malware development — the hard parts of crime that used to depend on talent and time.

The most telling line in the report is not about a specific tool. It is about access. Group-IB says adversaries are “industrialising AI, turning once specialist skills such as persuasion, impersonation and malware development into on-demand services available to anyone with a credit card.”

From novelty to infrastructure

Group-IB’s monitoring of dark web forums and underground marketplaces found AI abuse dominating conversations in 2025, with 23,621 thread-starting posts and 298,231 replies. Interest, the company says, peaked in 2023 after ChatGPT’s public release in late 2022, with reply volumes rising sharply around the release of GPT-4 and growing regulatory and public concern.

The speed matters. Past cybercrime shifts took time: new malware families, new monetisation methods, new criminal “playbooks.” Group-IB argues this one is moving faster because it rides on consumer-grade AI products, public tutorials, and a market willing to turn anything into a subscription.

“Unlike earlier waves of cybercrime, AI adoption by threat actors has been strikingly fast,” the whitepaper notes, adding that AI is “firmly embedded as core infrastructure throughout the criminal ecosystem rather than an occasional exploit.”

Crimeware priced like entertainment

One of the report’s more sobering claims is how cheap some of this tooling has become. Group-IB says vendors are selling AI-enabled crimeware in a way that mirrors legitimate SaaS businesses: tiered pricing, subscription models, and even customer support.

In other words, this is not just criminals using AI. It is criminals building businesses around AI.

Group-IB groups most underground AI offerings into three buckets: LLM exploitation, phishing and social engineering automation, and malware and tooling. And it says these tools are often bundled, making it easier for low-skill buyers to purchase a “starter kit” rather than assembling capabilities themselves.

The “operating systems” of modern cybercrime

The report points to three specific markets that help explain why AI changes the economics of cybercrime: dark LLMs, jailbreak services, and deepfake-as-a-service.

Dark LLMs. Group-IB says threat actors are moving beyond simply misusing mainstream chatbots and are creating proprietary “Dark LLMs” designed to be stable, capable, and free of ethical restrictions. It identifies at least three active vendors offering subscriptions from $30 to $200 per month, with a customer base exceeding 1,000 users.

That $30 figure matters less as a number and more as a signal: the barrier to entry is collapsing. If a usable malicious model costs roughly the same as a streaming subscription, the market naturally widens — and the average competence of the user base no longer limits the damage they can do.

Jailbreak frameworks. In this context, jailbreak frameworks are reusable prompts, templates, and instructions designed to bypass safety controls in legitimate LLMs and coax them into producing disallowed content. Group-IB says that by the end of Q3 2025, the volume of posts about jailbreak frameworks and instructions had almost equalled the total for all of 2024, with discussions focusing predominantly on ChatGPT and OpenAI models.

Even when criminals do not rely on a fully “dark” model, the jailbreak market provides an alternative: keep using mainstream tools, but learn how to push them past their limits.

Deepfake-as-a-service. The report describes a thriving marketplace for “synthetic identity kits” that can include AI video actors, cloned voices, and even biometric datasets for as little as $5. Group-IB says it identified more than 300 dark web posts between 2022 and September 2025 referencing “deepfake” and “KYC”, and that 2024 saw the largest year-on-year spike in deepfake services for sale (up 233%). The trend did not reverse in 2025, which brought a further 52% increase and more unique usernames participating in those discussions.

The report adds that attackers can build convincing voice clones from as little as 10 seconds of audio, harvested from social media, webinars, or past phone calls.

Why this wave feels different

The point is not that cybercriminals have new motives. It is that they have new leverage.

Craig Jones, former INTERPOL Director of Cybercrime and an independent strategic advisor, puts it bluntly: “AI has industrialized cybercrime. What once required skilled operators and time can now be bought, automated, and scaled globally.”

Jones argues that the driver remains familiar — “money, leverage, and access still drives the ecosystem” — but the execution has changed. “It has dramatically increased the speed, scale, and sophistication with which those motives are pursued,” he said, calling it “a new era, where speed, volume, and sophisticated impersonation has fundamentally changed how crime is committed and how hard it is to stop.”

This is the uncomfortable reality defenders are moving toward: attacks that look “human” but are produced in bulk, and attackers who can iterate quickly because they are buying tested products rather than building from scratch.

The defender’s problem: less trace, more noise

Group-IB’s warning is not just about the criminals. It is about the asymmetry. Traditional malware often leaves artefacts defenders can hunt: payload signatures, infrastructure reuse, logs that connect campaigns. AI-enabled attacks can be lighter, more personalised, and harder to attribute — especially when the “weapon” is language, voice, or video rather than a binary.

“Unlike traditional malware, AI-enabled attacks leave little forensic trace, making detection and attribution harder,” Group-IB says, arguing for “intelligence-led security strategies” that focus on adversary behaviour, predictive threat intelligence, fraud prevention, and visibility into underground ecosystems. It also stresses cross-border collaboration between private companies, law enforcement, and regulators — because the markets it is describing do not respect jurisdiction.

Dmitry Volkov, Group-IB’s CEO, describes the change from the standpoint of scale: “From the frontlines of cybercrime, AI is giving criminals unprecedented reach. Today, AI is enabling criminals to scale scams with ease and create hyper-personalisation and social engineering to a new standard.”

His bigger concern is what comes next: “In the near future, autonomous AI will carry out attacks that once required human expertise,” Volkov said. “Understanding this shift is essential to stopping the next generation of threats and ensuring defenders outpace attackers, moving towards an intelligence-led security strategy that combines AI-driven detection, fraud prevention, and deep visibility into underground criminal ecosystems.”

What is actually happening underneath the narrative

There is an easy way to read a report like this: AI is scary, criminals are adopting it, panic.

A more useful reading is structural: cybercrime is becoming a product economy powered by general-purpose AI. The “fifth wave” is not one new technique. It is the normalisation of criminal capability as a subscription service.

When persuasion is automated, impersonation is cheap, and malware tooling is packaged with customer support, the bottleneck shifts. The limiting factor is no longer skill. It is intent, access to targets, and the defender’s ability to recognise patterns in a world where every message can be unique but still machine-made.

That is why this matters. The future of cybercrime may not look like better hackers. It may look like better distribution.

If Group-IB is right, the question for 2026 is not whether criminals will use AI. They already are. The question is whether defenders and regulators can adapt to an underground economy that is learning, scaling, and pricing itself like software.

And whether the rest of us — employees, customers, citizens — can learn to treat “realistic” as a risk factor, not a reassurance.
