Article Details
Scrape Timestamp (UTC): 2025-02-01 17:21:57.037
Original Article Text
Google says hackers abuse Gemini AI to empower their attacks

Multiple state-sponsored groups are experimenting with Google's AI-powered Gemini assistant to increase productivity, research potential attack infrastructure, and conduct reconnaissance on targets. Google's Threat Intelligence Group (GTIG) found that government-linked advanced persistent threat (APT) groups use Gemini primarily for productivity gains rather than to develop or conduct novel AI-enabled cyberattacks that can bypass traditional defenses.

Threat actors have been trying to leverage AI tools for their attacks with varying degrees of success, as these utilities can at least shorten the preparation period. Google has identified Gemini activity associated with APT groups from more than 20 countries, with the most prominent activity coming from Iran and China. The most common uses included assistance with coding tasks for developing tools and scripts, research on publicly disclosed vulnerabilities, checking on technologies (explanations, translation), finding details on target organizations, and searching for methods to evade detection, escalate privileges, or run internal reconnaissance in a compromised network.

APTs using Gemini

Google says APTs from Iran, China, North Korea, and Russia have all experimented with Gemini, exploring the tool's potential to help them discover security gaps, evade detection, and plan their post-compromise activities. Google also observed cases where threat actors attempted to use public jailbreaks against Gemini or rephrased their prompts to bypass the platform's security measures. These attempts were reportedly unsuccessful.

OpenAI, the creator of the popular AI chatbot ChatGPT, made a similar disclosure in October 2024, so Google's latest report confirms the large-scale misuse of generative AI tools by threat actors of all levels.
While jailbreaks and security bypasses are a concern in mainstream AI products, the AI market is gradually filling with AI models that lack the proper protections to prevent abuse. Unfortunately, some models whose restrictions are trivial to bypass are also enjoying increased popularity. Cybersecurity intelligence firm KELA recently published details about the lax security measures of DeepSeek R1 and Alibaba's Qwen 2.5, which are vulnerable to prompt injection attacks that could streamline malicious use. Unit 42 researchers also demonstrated effective jailbreaking techniques against DeepSeek R1 and V3, showing that the models are easy to abuse for nefarious purposes.
Daily Brief Summary
Google’s Threat Intelligence Group identified multiple state-backed APTs using Gemini AI to enhance attack preparation and research.
APT groups from more than 20 countries, most prominently Iran and China, have used Gemini for tasks such as scripting and vulnerability research.
The AI tool is utilized mainly for productivity improvements rather than creating advanced, novel AI-driven cyber threats.
Threat actors leverage Gemini for tasks such as language translation, technology explanation, and gathering intelligence on potential targets.
APTs have also explored using Gemini to find methods for evasion, privilege escalation, and conducting reconnaissance within networks.
Google noted attempts to bypass Gemini’s security with jailbreaks and rephrased prompts, which were unsuccessful.
OpenAI disclosed similar misuse of its AI tools in October 2024, pointing to a growing trend of generative AI misuse in cyber operations.
Concerns rise as the market sees an influx of AI models with insufficient security measures, making them susceptible to exploitation.