Article Details

Scrape Timestamp (UTC): 2026-01-20 12:37:19.033

Source: https://www.theregister.com/2026/01/20/group_ib_ai_cycercrime_subscriptions/

Original Article Text


For the price of Netflix, crooks can now rent AI to run cybercrime

Group-IB says crims are forking out for Dark LLMs, deepfakes, and more at subscription prices.

Cybercrime has entered its AI era, with criminals now using weaponized language models and deepfakes as cheap, off-the-shelf infrastructure rather than experimental tools, according to researchers at Group-IB. In its latest whitepaper, the cybersec biz argues that AI has become the plumbing of modern cybercrime, quietly turning skills that once took time and talent into services that anyone with a credit card and a Telegram account can rent.

This isn't just a passing fad, according to Group-IB's numbers, which show mentions of AI on dark web forums up 371 percent since 2019, with replies rising even faster – almost twelvefold. AI-related threads were everywhere, racking up more than 23,000 new posts and almost 300,000 replies in 2025.

According to Group-IB, AI has done what automation always does: it took something fiddly and made it fast. The stages of an attack that once needed planning and specialist hands can now be pushed through automated workflows and sold on subscription, complete with the sort of pricing and packaging you'd expect from a shady SaaS outfit.

One of the uglier trends in the report is the rise of so-called Dark LLMs – self-hosted language models built for scams and malware rather than polite conversation. Group-IB says several vendors are already selling them for as little as $30 a month, with more than 1,000 users between them. Unlike jailbroken mainstream chatbots, these things are meant to stay out of sight, run behind Tor, and ignore safety rules by design.

Running alongside the Dark LLM market is a booming trade in deepfakes and impersonation tools. Group-IB says complete synthetic identity kits, including AI-generated faces and voices, can now be bought for about $5. Sales spiked sharply in 2024 and kept climbing through 2025.
There's real damage behind the numbers, too. Group-IB says deepfake fraud caused $347 million in verified losses in a single quarter, including everything from cloned executives to fake video calls. In one case, the firm helped a bank spot more than 8,000 deepfake-driven fraud attempts over eight months.

Group-IB found that scam call centers were using synthetic voices for first contact, with language models coaching the human operators as they went. Malware developers are also starting to test AI-assisted tools for reconnaissance and persistence, with early hints of more autonomous attacks down the line.

"From the frontlines of cybercrime, we see AI giving criminals unprecedented reach," said Anton Ushakov, head of Group-IB's Cybercrime Investigations Unit. "Today it helps scale scams with ease and hyper-personalization at a level never seen before. Tomorrow, autonomous AI could carry out attacks that once required human expertise."

From a defensive point of view, AI removes a lot of the usual clues. When voices, text, and video can all be generated on demand with off-the-shelf software, it becomes much harder to work out who's really behind an attack. Group-IB's view is that this leaves static defenses struggling.

In other words, cybercrime hasn't reinvented itself. It has just automated the old tricks, put them on subscription, and scaled them globally – and as ever, everyone else gets to deal with the mess.

Daily Brief Summary

CYBERCRIME // AI Tools Transform Cybercrime with Subscription-Based Dark LLMs and Deepfakes

Group-IB reports a significant rise in AI-driven cybercrime, with criminals renting AI tools like Dark LLMs and deepfakes at subscription prices, making sophisticated attacks more accessible.

AI discussions on dark web forums have surged 371% since 2019, with AI-related posts reaching over 23,000 and nearly 300,000 replies in 2025, indicating widespread interest and adoption.

Dark LLMs, designed for scams and malware, are available for as low as $30 a month, with over 1,000 users, enabling covert operations that bypass traditional safety measures.

The market for deepfakes and AI-generated identities is booming, with synthetic identity kits selling for about $5, contributing to $347 million in verified losses from deepfake fraud in one quarter.

AI tools are enhancing scam call centers by providing synthetic voices for initial contact and coaching humans, complicating attribution and defense efforts.

Malware developers are exploring AI-assisted reconnaissance and persistence, suggesting a future of more autonomous cyberattacks that previously required human expertise.

The integration of AI in cybercrime challenges static defenses, as AI-generated content obscures traditional indicators of compromise, complicating threat detection and response.