Article Details
Scrape Timestamp (UTC): 2024-02-15 00:12:33.744
Source: https://www.theregister.com/2024/02/15/openai_microsoft_spying/
Original Article Text
OpenAI shuts down China, Russia, Iran, N Korea accounts caught doing naughty things

You don't need us to craft phishing emails or write malware, super-lab sniffs

OpenAI has shut down five accounts it asserts were used by government agents to generate phishing emails and malicious software scripts, as well as to research ways to evade malware detection.

Specifically, China, Iran, Russia, and North Korea were apparently "querying open-source information, translating, finding coding errors, and running basic coding tasks" using the super-lab's models. Us vultures thought that was the whole point of OpenAI's offerings, but seemingly these nations crossed a line by using these systems with harmful intent, or by being straight-up personae non gratae.

The biz played up the terminations of service in a Wednesday announcement, stating it worked with its mega-backer Microsoft to identify and pull the plug on the accounts.

"We disrupted five state-affiliated malicious actors: two China-affiliated threat actors known as Charcoal Typhoon and Salmon Typhoon; the Iran-affiliated threat actor known as Crimson Sandstorm; the North Korea-affiliated actor known as Emerald Sleet; and the Russia-affiliated actor known as Forest Blizzard," the OpenAI team wrote.

Conversational large language models like OpenAI's GPT-4 can be used for things like extracting and summarizing information, crafting messages, and writing code. OpenAI tries to prevent misuse of its software by filtering out requests for harmful information and malicious code.

The lab also low-key reiterated that GPT-4 isn't that good at doing bad cyber-stuff anyway, mentioning in its announcement that the neural network, available via an API or ChatGPT Plus, "offers only limited, incremental capabilities for malicious cybersecurity tasks beyond what is already achievable with publicly available, non-AI powered tools."

Microsoft's Threat Intelligence team shared its own analysis of the malicious activities. That document suggests China's Charcoal Typhoon and Salmon Typhoon, which both have form attacking companies in Asia and the US, used GPT-4 to research information about specific companies and intelligence agencies. The teams also translated technical papers to learn more about cybersecurity tools, a job that, to be fair, is easily accomplished with other services.

Microsoft also opined that Crimson Sandstorm, a unit controlled by the Iranian Armed Forces, used OpenAI's models to seek ways to run scripted tasks and evade malware detection, and tried to develop highly targeted phishing attacks.

Emerald Sleet, acting on behalf of the North Korean government, queried the AI lab's models for information on defense issues relating to the Asia-Pacific region and publicly known vulnerabilities, on top of crafting phishing campaigns.

Finally, Forest Blizzard, a Russian military intelligence crew also known as the notorious Fancy Bear team, researched open-source satellite and radar imaging technology and looked for ways to automate scripting tasks.

OpenAI previously downplayed its models' ability to aid attackers, suggesting its neural nets "perform poorly" at crafting exploits for known vulnerabilities.
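To make the request-filtering pattern mentioned above concrete, here is a minimal sketch of the same idea done client-side, using OpenAI's public Moderation endpoint to vet a prompt before it reaches GPT-4. This is an illustration only: OpenAI's internal abuse detection is not public and certainly differs from this, and the example prompt and function name are invented for the sketch.

    # pip install openai
    # Minimal sketch: vet a prompt with OpenAI's public Moderation endpoint
    # before forwarding it to GPT-4. This mirrors the "filter out harmful
    # requests" pattern described in the article, not OpenAI's internal tooling.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask_gpt4(prompt: str) -> str:
        # Step 1: ask the moderation model whether the prompt is flagged.
        moderation = client.moderations.create(input=prompt)
        result = moderation.results[0]
        if result.flagged:
            # Report which policy categories tripped (e.g. "hate", "violence").
            hits = [name for name, hit in result.categories.model_dump().items() if hit]
            raise ValueError(f"Prompt rejected by moderation: {hits}")

        # Step 2: the prompt passed screening, so hand it to the chat model.
        completion = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        return completion.choices[0].message.content

    if __name__ == "__main__":
        # A benign use of the kind the article describes: translation/summarization.
        print(ask_gpt4("Translate this technical abstract into English: ..."))

In practice OpenAI applies this kind of screening server-side as well, so a client-side pre-check is redundant for hitting its filters; the sketch is only meant to show the shape of the screen-then-forward pattern.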
Daily Brief Summary
OpenAI identified and shut down five accounts it says were used by state-affiliated actors from China, Iran, Russia, and North Korea to generate phishing emails and malicious software scripts.
The terminated accounts belonged to two China-affiliated threat actors, Charcoal Typhoon and Salmon Typhoon; the Iran-affiliated Crimson Sandstorm; the North Korea-affiliated Emerald Sleet; and the Russia-affiliated Forest Blizzard.
These threat actors were allegedly using OpenAI's services for activities such as language translation, finding coding errors, and generating code, which could support cyberattacks and phishing campaigns.
OpenAI collaborated with Microsoft to detect and disable these malicious accounts and stressed the limited capabilities of GPT-4 in performing malicious cybersecurity tasks.
Microsoft’s Threat Intelligence provided additional details on the specific nature of activities conducted by these groups, such as translating technical papers and researching cybersecurity.
OpenAI emphasized that its systems are designed to prevent misuse and filter out requests for harmful information and malicious code, suggesting that its AI models are not particularly effective in aiding cybercrime.