Article Details
Scrape Timestamp (UTC): 2024-02-14 14:43:09.872
Source: https://thehackernews.com/2024/02/microsoft-openai-warn-of-nation-state.html
Original Article Text
Microsoft, OpenAI Warn of Nation-State Hackers Weaponizing AI for Cyberattacks

Nation-state actors associated with Russia, North Korea, Iran, and China are experimenting with artificial intelligence (AI) and large language models (LLMs) to complement their ongoing cyber attack operations. The findings come from a report published by Microsoft in collaboration with OpenAI, both of which said they disrupted the efforts of five state-affiliated actors that used OpenAI's services to perform malicious cyber activities, terminating the actors' assets and accounts.

"Language support is a natural feature of LLMs and is attractive for threat actors with continuous focus on social engineering and other techniques relying on false, deceptive communications tailored to their targets' jobs, professional networks, and other relationships," Microsoft said in a report shared with The Hacker News.

While no significant or novel attacks employing LLMs have been detected to date, adversarial exploration of AI technologies has spanned various phases of the attack chain, such as reconnaissance, coding assistance, and malware development. "These actors generally sought to use OpenAI services for querying open-source information, translating, finding coding errors, and running basic coding tasks," the AI firm said.

For instance, the Russian nation-state group tracked as Forest Blizzard (aka APT28) is said to have used OpenAI's offerings to conduct open-source research into satellite communication protocols and radar imaging technology, as well as for support with scripting tasks. Some of the other notable hacking crews are listed below -

Microsoft said it's also formulating a set of principles to mitigate the risks posed by the malicious use of AI tools and APIs by nation-state advanced persistent threats (APTs), advanced persistent manipulators (APMs), and cybercriminal syndicates, and to conceive effective guardrails and safety mechanisms around its models.
"These principles include identification and action against malicious threat actors' use, notification to other AI service providers, collaboration with other stakeholders, and transparency," Redmond said.
Daily Brief Summary
Microsoft and OpenAI report that nation-state actors from Russia, North Korea, Iran, and China are incorporating AI into their cyber warfare tactics.
The two companies jointly disrupted five state-affiliated groups by terminating the accounts and assets those groups used to access AI services.
Misuse of large language models (LLMs) by attackers focuses on social engineering and deceptive communications that exploit professional relationships.
Although no breakthrough AI-driven cyberattacks have been observed, these actors are testing AI across multiple phases of cyber operations, including reconnaissance and malware development.
Notably, Russia's Forest Blizzard group used OpenAI's resources for research on satellite communications and scripting assistance, showcasing the diverse applications of AI in cyber espionage.
Microsoft is proactively developing principles to counteract the harmful use of AI tools by advanced persistent threats and cybercriminal organizations, emphasizing identification, notification, collaboration, and transparency.