Article Details
Scrape Timestamp (UTC): 2025-12-06 15:29:45.608
Source: https://thehackernews.com/2025/12/researchers-uncover-30-flaws-in-ai.html
Original Article Text
Click to Toggle View
Researchers Uncover 30+ Flaws in AI Coding Tools Enabling Data Theft and RCE Attacks. Over 30 security vulnerabilities have been disclosed in various artificial intelligence (AI)-powered Integrated Development Environments (IDEs) that combine prompt injection primitives with legitimate features to achieve data exfiltration and remote code execution. The security shortcomings have been collectively named IDEsaster by security researcher Ari Marzouk (MaccariTA). They affect popular IDEs and extensions such as Cursor, Windsurf, Kiro.dev, GitHub Copilot, Zed.dev, Roo Code, Junie, and Cline, among others. Of these, 24 have been assigned CVE identifiers. "I think the fact that multiple universal attack chains affected each and every AI IDE tested is the most surprising finding of this research," Marzouk told The Hacker News. "All AI IDEs (and coding assistants that integrate with them) effectively ignore the base software (IDE) in their threat model. They treat their features as inherently safe because they've been there for years. However, once you add AI agents that can act autonomously, the same features can be weaponized into data exfiltration and RCE primitives." At its core, these issues chain three different vectors that are common to AI-driven IDEs - The highlighted issues are different from prior attack chains that have leveraged prompt injections in conjunction with vulnerable tools (or abusing legitimate tools to perform read or write actions) to modify an AI agent's configuration to achieve code execution or other unintended behavior. What makes IDEsaster notable is that it takes prompt injection primitives and an agent's tools, using them to activate legitimate features of the IDE to result in information leakage or command execution. Context hijacking can be pulled off in myriad ways, including through user-added context references that can take the form of pasted URLs or text with hidden characters that are not visible to the human eye, but can be parsed by the LLM. Alternatively, the context can be polluted by using a Model Context Protocol (MCP) server through tool poisoning or rug pulls, or when a legitimate MCP server parses attacker-controlled input from an external source. Some of the identified attacks made possible by the new exploit chain is as follows - It's worth noting that the last two examples hinge on an AI agent being configured to auto-approve file writes, which subsequently allows an attacker with the ability to influence prompts to cause malicious workspace settings to be written. But given that this behavior is auto-approved by default for in-workspace files, it leads to arbitrary code execution without any user interaction or the need to reopen the workspace. With prompt injections and jailbreaks acting as the first step for the attack chain, Marzouk offers the following recommendations - Developers of AI agents and AI IDEs are advised to apply the principle of least privilege to LLM tools, minimize prompt injection vectors, harden the system prompt, use sandboxing to run commands, perform security testing for path traversal, information leakage, and command injection. The disclosure coincides with the discovery of several vulnerabilities in AI coding tools that could have a wide range of impacts - As agentic AI tools are becoming increasingly popular in enterprise environments, these findings demonstrate how AI tools expand the attack surface of development machines, often by leveraging an LLM's inability to distinguish between instructions provided by a user to complete a task and content that it may ingest from an external source, which, in turn, can contain an embedded malicious prompt. "Any repository using AI for issue triage, PR labeling, code suggestions, or automated replies is at risk of prompt injection, command injection, secret exfiltration, repository compromise and upstream supply chain compromise," Aikido researcher Rein Daelman said. Marzouk also said the discoveries emphasized the importance of "Secure for AI," which is a new paradigm that has been coined by the researcher to tackle security challenges introduced by AI features, thereby ensuring that products are not only secure by default and secure by design, but are also conceived keeping in mind how AI components can be abused over time. "This is another example of why the 'Secure for AI' principle is needed," Marzouk said. "Connecting AI agents to existing applications (in my case IDE, in their case GitHub Actions) creates new emerging risks."
Daily Brief Summary
Researchers have identified over 30 vulnerabilities in AI-driven Integrated Development Environments (IDEs), potentially leading to data theft and remote code execution (RCE) attacks.
The vulnerabilities, named IDEsaster, impact popular IDEs and extensions such as GitHub Copilot, Cursor, and Zed.dev, with 24 issues assigned CVE identifiers.
The flaws exploit prompt injection primitives combined with legitimate IDE features, enabling data exfiltration and RCE through AI agents.
Attack vectors include context hijacking and tool poisoning, which can be triggered by user-added references or malicious inputs parsed by AI models.
Recommendations include applying least privilege principles, minimizing injection vectors, and employing sandboxing to mitigate risks associated with AI agents.
The findings highlight the expanded attack surface introduced by AI tools in development environments, posing risks like prompt injection and supply chain compromise.
The research underscores the need for a "Secure for AI" approach to address security challenges posed by AI components in software development.