Article Details
Scrape Timestamp (UTC): 2025-02-24 21:37:27.073
Original Article Text
OpenAI bans ChatGPT accounts used by North Korean hackers.

OpenAI says it blocked several North Korean hacking groups from using its ChatGPT platform to research future targets and find ways to break into their networks. "We banned accounts demonstrating activity potentially associated with publicly reported Democratic People's Republic of Korea (DPRK)-affiliated threat actors," the company said in its February 2025 threat intelligence report. "Some of these accounts engaged in activity involving TTPs consistent with a threat group known as VELVET CHOLLIMA (AKA Kimsuky, Emerald Sleet), while other accounts were potentially related to an actor that was assessed by a credible source to be linked to STARDUST CHOLLIMA (AKA APT38, Sapphire Sleet)."

The now-banned accounts were detected using information from an industry partner. In addition to researching which tools to use during cyberattacks, the threat actors used ChatGPT to find information on cryptocurrency-related topics, a common interest among North Korean state-sponsored threat groups.

The malicious actors also used ChatGPT for coding assistance, including help with open-source Remote Administration Tools (RATs), as well as debugging, research, and development assistance for open-source and publicly available security tools and code that could be used in Remote Desktop Protocol (RDP) brute-force attacks.

OpenAI threat analysts also found that the North Korean actors revealed staging URLs for malicious binaries unknown to security vendors at the time while debugging auto-start extensibility point (ASEP) locations and macOS attack techniques. These staging URLs and the associated compiled executable files were submitted to an online scanning service to facilitate sharing with the broader security community. As a result, some vendors now reliably detect these binaries, protecting potential victims from future attacks.
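The RDP brute-force tooling described above has a straightforward defensive counterpart: counting failed logons per source address inside a sliding time window. The sketch below is purely illustrative and not taken from OpenAI's report; the event format, threshold, and window size are all assumptions.

```python
from collections import defaultdict, deque

def flag_bruteforce(events, threshold=10, window=60):
    """Flag source IPs with `threshold` or more failed logons within `window` seconds.

    `events` is an iterable of (timestamp_seconds, source_ip) tuples for
    failed logon attempts; the shape is an assumption for illustration.
    """
    recent = defaultdict(deque)  # source_ip -> timestamps of recent failures
    flagged = set()
    for ts, ip in sorted(events):
        q = recent[ip]
        q.append(ts)
        # Drop failures that fell out of the sliding window.
        while q and ts - q[0] > window:
            q.popleft()
        if len(q) >= threshold:
            flagged.add(ip)
    return flagged
```

In practice the event stream might come from Windows Security log records for failed logons (Event ID 4625) on RDP-exposed hosts; tuning the threshold and window to the environment is left to the defender.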
OpenAI's report documents further malicious uses of the banned accounts beyond the examples above.

The company also banned accounts linked to a potential North Korean IT worker scheme, described as having all the characteristics of efforts to obtain income for the Pyongyang regime by tricking Western companies into hiring North Koreans. "After appearing to gain employment they used our models to perform job-related tasks like writing code, troubleshooting and messaging with coworkers," OpenAI explained. "They also used our models to devise cover stories to explain unusual behaviors such as avoiding video calls, accessing corporate systems from unauthorized countries or working irregular hours."

Since October 2024, when it published its previous report, OpenAI has also detected and disrupted two campaigns originating from China, "Peer Review" and "Sponsored Discontent." These campaigns used ChatGPT models to research and develop tools linked to a surveillance operation and to generate anti-American, Spanish-language articles. In the October report, OpenAI revealed that since the beginning of 2024 it had disrupted over twenty campaigns linked to cyber operations and covert influence operations associated with Iranian and Chinese state-sponsored hackers.
Daily Brief Summary
OpenAI identified and banned multiple North Korean hacker accounts using ChatGPT to target entities and conduct research on hacking methods.
Detected activities align with known North Korean state-sponsored groups, including VELVET CHOLLIMA and STARDUST CHOLLIMA, which are involved in cyber espionage and financial theft.
Banned accounts were used for coding assistance and the development of tools for potential cyberattacks, including Remote Administration Tools and Remote Desktop Protocol brute force attacks.
Hackers also used ChatGPT to gather cryptocurrency-related information and to debug malicious software, in the process revealing staging URLs for binaries then unknown to security vendors.
Details surfaced during this debugging, including the staging URLs and compiled binaries, were shared via an online scanning service and have since helped security vendors reliably detect the malware.
In addition to direct hacking efforts, North Korean threat actors reportedly used ChatGPT to support employment scams aimed at infiltrating Western companies to indirectly generate revenue for Pyongyang.
OpenAI has expanded its monitoring and disruption of adverse activities by state actors, including campaigns from China focused on surveillance and disinformation.
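The vendor detections mentioned above typically start with the simplest form of indicator matching: comparing a file's cryptographic hash against a blocklist of known-bad digests. A minimal sketch, assuming a hypothetical SHA-256 blocklist (the function names and data shapes are illustrative, not from the report):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

def match_iocs(file_contents, known_bad_hashes):
    """Return the names of files whose SHA-256 appears in the blocklist.

    `file_contents` maps file name -> raw bytes; `known_bad_hashes` is a
    set of hex SHA-256 digests shared by the security community.
    """
    return {name for name, data in file_contents.items()
            if sha256_of(data) in known_bad_hashes}
```

Hash matching only catches exact copies of a shared binary, which is why the staging URLs were shared alongside the executables: network indicators survive trivial recompilation of the payload.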