Article Details
Scrape Timestamp (UTC): 2024-02-15 15:58:42.407
Original Article Text
OpenAI blocks state-sponsored hackers from using ChatGPT.

OpenAI has removed accounts used by state-sponsored threat groups from Iran, North Korea, China, and Russia that were abusing its artificial intelligence chatbot, ChatGPT.

The AI research organization took action against specific accounts associated with the hacking groups that were misusing its large language model (LLM) services for malicious purposes, after receiving key information from Microsoft's Threat Intelligence team. In a separate report, Microsoft provides more details on how and why these advanced threat actors used ChatGPT. Activity associated with these threat groups was terminated on the platform.

Generally, the threat actors used the large language models to enhance their strategic and operational capabilities, including reconnaissance, social engineering, evasion tactics, and generic information gathering. None of the observed cases involved the use of LLMs to directly develop malware or complete custom exploitation tools. Instead, the actual coding assistance concerned lower-level tasks such as requesting evasion tips, scripting, disabling antivirus software, and generally optimizing technical operations.

In January, a report from the United Kingdom's National Cyber Security Centre (NCSC) predicted that by 2025 the operations of sophisticated advanced persistent threats (APTs) will benefit from AI tools across the board, especially in developing evasive custom malware. Last year, though, according to OpenAI's and Microsoft's findings, the uplift was limited to segments such as phishing and social engineering, while the rest of the activity remained largely exploratory.

OpenAI says it will continue to monitor and disrupt state-backed hackers using specialized monitoring technology, information from industry partners, and dedicated teams tasked with identifying suspicious usage patterns.

"We take lessons learned from these actors' abuse and use them to inform our iterative approach to safety," reads OpenAI's post.

"Understanding how the most sophisticated malicious actors seek to use our systems for harm gives us a signal into practices that may become more widespread in the future, and allows us to continuously evolve our safeguards," the company added.
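To make the "suspicious usage patterns" idea concrete, here is a minimal, hypothetical Python sketch of account-level heuristic flagging. It does not reflect OpenAI's actual monitoring stack; the `SUSPICIOUS_TERMS` list, the threshold, and the `AccountMonitor` class are all illustrative assumptions, and a real system would rely on classifiers, partner intelligence, and human review rather than static string matching.

```python
# Hypothetical sketch only -- not OpenAI's actual monitoring system.
# Shows, in the simplest terms, how per-request content signals could be
# accumulated per account until a human-review threshold is crossed.
from collections import Counter
from dataclasses import dataclass, field

# Assumed keyword list for illustration; real detection would use trained
# classifiers and threat-intel feeds, not a handful of static strings.
SUSPICIOUS_TERMS = {"disable antivirus", "evade detection", "obfuscate payload"}

@dataclass
class AccountMonitor:
    flag_threshold: int = 3  # assumed number of flags before escalation
    _flags: Counter = field(default_factory=Counter)

    def score_request(self, account_id: str, prompt: str) -> bool:
        """Record a flag if the prompt matches a suspicious term; return
        True once the account crosses the review threshold."""
        if any(term in prompt.lower() for term in SUSPICIOUS_TERMS):
            self._flags[account_id] += 1
        return self._flags[account_id] >= self.flag_threshold

monitor = AccountMonitor()
for prompt in ["summarize this RFC",
               "script to disable antivirus silently",
               "how do I evade detection by EDR tools",
               "help me obfuscate payload strings"]:
    if monitor.score_request("acct-123", prompt):
        print("acct-123 queued for human review")
```

The per-account counter rather than per-request blocking mirrors the article's framing: individual low-level requests (evasion tips, scripting) are ambiguous on their own, and it is the pattern of usage across an account that warrants review.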
Daily Brief Summary
OpenAI has deactivated accounts of state-backed threat groups from Iran, North Korea, China, and Russia that were abusing ChatGPT.
The actions were taken after collaboration with Microsoft's Threat Intelligence team, which helped identify the malicious use of OpenAI's services.
Threat groups utilized ChatGPT for various nefarious activities such as reconnaissance, social engineering, and developing tactics to evade detection.
While there was an increase in the use of AI tools for phishing and social engineering, there was no evidence of them being used to directly write malware or build complete custom exploitation tools.
The UK's NCSC forecast in January that by 2025, AI tools would benefit APT operations across the board, particularly in developing evasive custom malware.
OpenAI is employing specialized monitoring technology and information sharing with partners to detect and prevent misuse by sophisticated actors.
OpenAI emphasizes learning from these incidents to evolve its safeguards and anticipate abuse practices that may become more widespread in the future.