Article Details
Scrape Timestamp (UTC): 2024-10-16 11:27:30.229
Source: https://thehackernews.com/2024/10/from-misuse-to-abuse-ai-risks-and.html
Original Article Text
Click to Toggle View
From Misuse to Abuse: AI Risks and Attacks. AI from the attacker's perspective: See how cybercriminals are leveraging AI and exploiting its vulnerabilities to compromise systems, users, and even other AI applications Cybercriminals and AI: The Reality vs. Hype "AI will not replace humans in the near future. But humans who know how to use AI are going to replace those humans who don't know how to use AI," says Etay Maor, Chief Security Strategist at Cato Networks and founding member of Cato CTRL. "Similarly, attackers are also turning to AI to augment their own capabilities." Yet, there is a lot more hype than reality around AI's role in cybercrime. Headlines often sensationalize AI threats, with terms like "Chaos-GPT" and "Black Hat AI Tools," even claiming they seek to destroy humanity. However, these articles are more fear-inducing than descriptive of serious threats. For instance, when explored in underground forums, several of these so-called "AI cyber tools" were found to be nothing more than rebranded versions of basic public LLMs with no advanced capabilities. In fact, they were even marked by angry attackers as scams. How Hackers are Really Using AI in Cyber Attacks In reality, cybercriminals are still figuring out how to harness AI effectively. They are experiencing the same issues and shortcomings legitimate users are, like hallucinations and limited abilities. Per their predictions, it will take a few years before they are able to leverage GenAI effectively for hacking needs. For now, GenAI tools are mostly being used for simpler tasks, like writing phishing emails and generating code snippets that can be integrated into attacks. In addition, we've observed attackers providing compromised code to AI systems for analysis, as an effort to "normalize" such code as non-malicious. Using AI to Abuse AI: Introducing GPTs GPTs, introduced by OpenAI on November 6, 2023, are customizable versions of ChatGPT that allow users to add specific instructions, integrate external APIs and incorporate unique knowledge sources. This feature enables users to create highly specialized applications, such as tech support bots, educational tools, and more. In addition, OpenAI is offering developers monetization options for GPTs, through a dedicated marketplace. Abusing GPTs GPTs introduce potential security concerns. One notable risk is the exposure of sensitive instructions, proprietary knowledge, or even API keys embedded in the custom GPT. Malicious actors can use AI, specifically prompt engineering, to replicate a GPT and tap into its monetization potential. Attackers can use prompts to retrieve knowledge sources, instructions, configuration files, and more. These might be as simple as prompting the custom GPT to list all uploaded files and custom instructions or asking for debugging information. Or, sophisticated like requesting the GPT to zip one of the PDF files and create a downloadable link, asking the GPT to list all its capabilities in a structured table format, and more. "Even protections that developers put in place can be bypassed and all knowledge can be extracted," says Vitaly Simonovich, Threat Intelligence Researcher at Cato Networks and Cato CTRL member. These risks can be avoided by: AI Attacks and Risks There are multiple frameworks existing today to assist organizations that are considering developing and creating AI-based software: LLM Attack Surface There are six key LLM (Large Language Model) components that can be targeted by attackers: Real-World Attacks and Risks Let's wrap up with some examples of LLM manipulations, which can easily be used in a malicious manner. Summing Up: AI in Cyber Crime AI is a powerful tool for both defenders and attackers. As cybercriminals continue to experiment with AI, it's important to understand how they think, the tactics they employ and the options they face. This will allow organizations to better safeguard their AI systems against misuse and abuse. Watch the entire masterclass here.
Daily Brief Summary
AI exploitation is increasingly becoming a focus for cybercriminals, though its use is more hype than reality based on sensational media reports.
Current AI tools accessible to hackers are typically enhanced models of publicly available large language models, with no superior capabilities and often marked as scams.
Cybercriminals use AI primarily for generating phishing emails, writing code snippets, and trying to deceive AI systems into accepting malicious code as benign.
The introduction of customizable GPTs (General Purpose Technologies) by OpenAI presents new vulnerabilities, such as the exposure of sensitive data through embedded instructions and API keys.
Through prompt engineering, attackers manipulate GPTs to access and leak proprietary information or configure the GPTs for malicious purposes.
Despite potential risks, there are existing frameworks that help organizations safeguard their AI developments from cyber threats.
Understanding criminal tactics and potential misuse of AI is crucial for developing better defensive strategies against AI-driven cyberattacks.