OpenAI confirms threat actors use ChatGPT to write malware

OpenAI has disrupted more than 20 malicious cyber operations that abused its AI-powered chatbot, ChatGPT, for debugging and developing malware, spreading misinformation, evading detection, and conducting spear-phishing attacks.

The report, which covers operations since the beginning of the year, is the first official confirmation that mainstream generative AI tools are being used to enhance offensive cyber operations.

The first signs of such activity were reported by Proofpoint in April, which suspected TA547 (aka "Scully Spider") of deploying an AI-written PowerShell loader for its final payload, the Rhadamanthys info-stealer.

Last month, HP Wolf researchers reported with high confidence that cybercriminals targeting French users were employing AI tools to write scripts used as part of a multi-step infection chain.

The latest report by OpenAI confirms the abuse of ChatGPT, presenting cases of Chinese and Iranian threat actors leveraging it to enhance the effectiveness of their operations.

Use of ChatGPT in real attacks

The first threat actor outlined by OpenAI is 'SweetSpecter,' a Chinese adversary first documented by Cisco Talos analysts in November 2023 as a cyber-espionage group targeting Asian governments.

OpenAI reports that SweetSpecter targeted it directly, sending spear-phishing emails with malicious ZIP attachments, masked as support requests, to the personal email addresses of OpenAI employees. If opened, the attachments triggered an infection chain that dropped the SugarGh0st RAT on the victim's system.

Upon further investigation, OpenAI found that SweetSpecter was using a cluster of ChatGPT accounts to perform scripting and vulnerability-analysis research with the help of the LLM tool.
The threat actors utilized ChatGPT for a range of such requests.

The second case concerns 'CyberAv3ngers,' a threat group affiliated with Iran's Islamic Revolutionary Guard Corps (IRGC), known for targeting industrial systems at critical infrastructure sites in Western countries.

OpenAI reports that accounts associated with this group asked ChatGPT to produce default credentials for widely used programmable logic controllers (PLCs), develop custom Bash and Python scripts, and obfuscate code.

The Iranian hackers also used ChatGPT to plan their post-compromise activity, learn how to exploit specific vulnerabilities, and choose methods to steal user passwords on macOS systems.

The third case highlighted in OpenAI's report concerns Storm-0817, another Iranian threat group. Storm-0817 reportedly used ChatGPT to debug malware, create an Instagram scraper, translate LinkedIn profiles into Persian, and develop custom malware for the Android platform along with the supporting command-and-control (C2) infrastructure.

The malware created with the help of OpenAI's chatbot can steal contact lists, call logs, and files stored on the device, take screenshots, access the user's browsing history, and obtain their precise location.

"In parallel, STORM-0817 used ChatGPT to support the development of server side code necessary to handle connections from compromised devices," reads the OpenAI report. "This allowed us to see that the command and control server for this malware is a WAMP (Windows, Apache, MySQL & PHP/Perl/Python) setup and during testing was using the domain stickhero[.]pro."

All OpenAI accounts used by the above threat actors have been banned, and the associated indicators of compromise, including IP addresses, have been shared with cybersecurity partners.
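Shared indicators of compromise like the C2 domain above are typically distributed "defanged" (e.g. stickhero[.]pro) so they cannot be clicked accidentally. A minimal defensive sketch of how a recipient might refang such indicators and match them against logs; everything here is illustrative and hypothetical except the domain named in OpenAI's report, and the IP shown is a reserved documentation address:

```python
# Illustrative sketch, not code from the report: refanging shared IoCs
# and flagging log lines that mention them.

def refang(indicator: str) -> str:
    """Undo common defanging conventions so indicators match raw logs."""
    return indicator.replace("[.]", ".").replace("hxxp", "http")

# Hypothetical shared indicator list; 203.0.113.7 is a documentation IP.
IOCS = {refang(i) for i in ["stickhero[.]pro", "203.0.113.7"]}

def flag_log_line(line: str) -> bool:
    """Return True if the log line contains any known indicator."""
    return any(ioc in line for ioc in IOCS)

# Hypothetical log entries for demonstration.
logs = [
    "GET http://stickhero.pro/checkin id=42",
    "GET http://example.com/index.html",
]
hits = [line for line in logs if flag_log_line(line)]
```

Real deployments would use a threat-intelligence platform and structured indicator feeds rather than substring matching, but the refang-then-match flow is the same idea.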
Although none of the cases described above give threat actors new capabilities in developing malware, they constitute proof that generative AI tools can make offensive operations more efficient for low-skilled actors, assisting them in all stages, from planning to execution.
Daily Brief Summary
OpenAI disrupted over 20 cyber operations that exploited its AI chatbot, ChatGPT, for malicious purposes including malware development and misinformation.
Proofpoint and HP Wolf detected initial misuse of AI tools by cybercriminals in creating sophisticated malware and scripts.
Chinese threat group "SweetSpecter" and Iranian groups "CyberAv3ngers" and "Storm-0817" used ChatGPT to enhance cyber operations targeting governments and critical infrastructures.
Uses of the AI included writing malware, scripting, vulnerability analysis, and phishing attacks to aid espionage and infrastructure disruption.
OpenAI identified and banned accounts linked to these threat actors and shared indicators of compromise with cybersecurity partners.
Although AI does not give attackers fundamentally new capabilities, it significantly enhances the efficiency of their existing methods, lowering the skill barrier for malicious activities.
The incidents highlight the dual-use nature of AI technologies and the importance of monitoring and controlling AI tool abuse in cybersecurity environments.