Article Details

Scrape Timestamp (UTC): 2026-02-12 07:02:32.305

Source: https://www.theregister.com/2026/02/12/google_china_apt31_gemini/

Original Article Text


Google: China's APT31 used Gemini to plan cyberattacks against US orgs

Meanwhile, IP-stealing 'distillation attacks' on the rise

A Chinese government hacking group that has been sanctioned for targeting America's critical infrastructure used Google's AI chatbot, Gemini, to auto-analyze vulnerabilities and plan cyberattacks against US organizations, the company says.

While there's no indication that any of these attacks were successful, "APT groups like this continue to experiment with adopting AI to support semi-autonomous offensive operations," Google Threat Intelligence Group (GTIG) chief analyst John Hultquist told The Register. "We anticipate that China-based actors in particular will continue to build agentic approaches for cyber offensive scale."

In the threat-intel group's most recent AI Threat Tracker report, released on Thursday and shared with The Register in advance, Google attributes this activity to APT31, a Beijing-backed crew also known as Violet Typhoon, Zirconium, and Judgment Panda.

This goon squad was one of many exploiting a series of Microsoft SharePoint bugs over the summer, and in March 2024 the US issued sanctions against, and criminally charged, seven APT31 members accused of breaking into computer networks, email accounts, and cloud storage belonging to numerous high-value targets.

The most recent attempts by APT31 to use Google's Gemini AI tool happened late last year, we're told.

"APT31 employed a highly structured approach by prompting Gemini with an expert cybersecurity persona to automate the analysis of vulnerabilities and generate targeted testing plans," according to the report.

In one case, the China-based gang used Hexstrike, an open source red-teaming tool built on the Model Context Protocol (MCP), to analyze various exploits - including remote code execution, web application firewall (WAF) bypass techniques, and SQL injection - "against specific US-based targets," the Googlers wrote.

Hexstrike enables models, including Gemini, to execute more than 150 security tools with a slew of capabilities, including network and vulnerability scanning, reconnaissance, and penetration testing. Its intended use is to help ethical hackers and bug hunters find security weaknesses and collect bug bounties - but shortly after its release in mid-August, criminals began using the platform for more nefarious purposes.

Moving closer to automated attacks

Integrating Hexstrike with Gemini "automated intelligence gathering to identify technological vulnerabilities and organizational defense weaknesses," the AI threat tracker says, noting that Google has since disabled accounts linked to this campaign. "This activity explicitly blurs the line between a routine security assessment query and a targeted malicious reconnaissance operation."
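For readers unfamiliar with MCP, the sketch below shows the general shape of how a tool gets exposed to a model over the protocol, using the FastMCP helper from the official `mcp` Python SDK. It is a hedged illustration, not Hexstrike's actual code: the server name, the single `version_scan` tool, and the nmap wrapper are hypothetical stand-ins for the 150-plus tools Hexstrike wraps.

```python
# Minimal sketch: exposing one security tool to an MCP-capable model client.
# Assumes the official `mcp` Python SDK is installed and nmap is on PATH.
# Names (demo-recon-server, version_scan) are illustrative, not Hexstrike's.
import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-recon-server")

@mcp.tool()
def version_scan(host: str) -> str:
    """Return service/version banners for a host via an nmap -sV scan."""
    result = subprocess.run(
        ["nmap", "-sV", host],
        capture_output=True, text=True, timeout=300,
    )
    return result.stdout

if __name__ == "__main__":
    # Serves over stdio; a connected model can now invoke version_scan as
    # just another tool call, the capability Hexstrike scales up across
    # its full toolset.
    mcp.run()
```

Once a model client is wired to a server like this, "run a scan" becomes one more action the model can chain into a plan - which is why the report describes the line between a routine assessment and targeted reconnaissance as blurred.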
"One is the ability to operate across the intrusion," he said, noting the earlier Anthropic report about Chinese cyberspies abusing its Claude Code AI tool to automate most elements of attacks directed at high-profile companies and government organizations. In "a small number of cases," they even succeeded. "The other is automating the development of vulnerability exploitation," Hultquist said. "These are two ways where adversaries can get major advantages and move through the intrusion cycle with minimal human interference. That allows them to move faster than defenders and hit a lot of targets." Mind the patch gap In addition, using AI agents to find vulnerabilities and test exploits widens the patch gap - the time between the bug becoming known and a full working fix being deployed and implemented. "It's a really significant space currently," Hultquist said. "In some organizations, it takes weeks to put defenses in place." This requires security professionals to think differently about defense, using AI to respond and fix security weaknesses more quickly than humans can on their own. "We are going to have to leverage the advantages of AI, and increasingly remove humans from the loop, so that we can respond at machine speed," Hultquist noted. The latest report also found an increase in model extraction attempts - what it calls "distillation attacks" - and says both GTIG and Google DeepMind identified miscreants attempting to perform model extraction on Google's AI products. This is a type of intellectual property theft used to gain insights into a model's underlying reasoning and chain-of-thought processes.  "This is coming from threat actors throughout the globe," Hultquist said. "Your model is really valuable IP, and if you can distill the logic behind it, there's very real potential that you can replicate that technology – which is not inexpensive." This essentially gives criminals and shady companies the ability to accelerate AI model development at a much lower cost, and Google's report cites "model stealing and capability extraction emanating from researchers and private sector companies globally."

Daily Brief Summary

NATION STATE ACTIVITY // China's APT31 Exploits AI for Cyberattack Planning Against US Targets

Google reports that China's APT31 used its Gemini AI chatbot to analyze vulnerabilities and plan cyberattacks against US organizations, though there is no indication any of the attacks succeeded.

APT31, also known as Violet Typhoon, has been sanctioned by the US for targeting critical infrastructure, and seven of its members have been criminally charged over intrusions into high-value targets.

The group employed Hexstrike, an open source MCP-based red-teaming tool, to automate vulnerability analysis and penetration testing, blurring the line between security assessments and malicious operations.

Google disabled accounts linked to these activities, underscoring how hard it is to distinguish legitimate security research from malicious use of AI tools.

The use of AI in cyber operations accelerates the intrusion cycle and widens the patch gap, necessitating faster defensive measures.

Model extraction attacks, or "distillation attacks," pose a threat to AI intellectual property, allowing adversaries to replicate technology at reduced costs.

The report stresses the need for leveraging AI in defense to keep pace with evolving threats and minimize human intervention in response efforts.