Article Details

Scrape Timestamp (UTC): 2026-01-15 15:12:12.310

Source: https://thehackernews.com/2026/01/researchers-reveal-reprompt-attack.html

Original Article Text

Click to Toggle View

Researchers Reveal Reprompt Attack Allowing Single-Click Data Exfiltration From Microsoft Copilot. Cybersecurity researchers have disclosed details of a new attack method dubbed Reprompt that could allow bad actors to exfiltrate sensitive data from artificial intelligence (AI) chatbots like Microsoft Copilot in a single click, while bypassing enterprise security controls entirely. "Only a single click on a legitimate Microsoft link is required to compromise victims," Varonis security researcher Dolev Taler said in a report published Wednesday. "No plugins, no user interaction with Copilot." "The attacker maintains control even when the Copilot chat is closed, allowing the victim's session to be silently exfiltrated with no interaction beyond that first click." Following responsible disclosure, Microsoft has addressed the security issue. The attack does not affect enterprise customers using Microsoft 365 Copilot. At a high level, Reprompt employs three techniques to achieve a data‑exfiltration chain - In a hypothetical attack scenario, a threat actor could convince a target to click on a legitimate Copilot link sent via email, thereby initiating a sequence of actions that causes Copilot to execute the prompts smuggled via the "q" parameter, after which the attacker "reprompts" the chatbot to fetch additional information and share it. This can include prompts, such as "Summarize all of the files that the user accessed today," "Where does the user live?" or "What vacations does he have planned?" Since all subsequent commands are sent directly from the server, it makes it impossible to figure out what data is being exfiltrated just by inspecting the starting prompt. Reprompt effectively creates a security blind spot by turning Copilot into an invisible channel for data exfiltration without requiring any user input prompts, plugins, or connectors. Like other attacks aimed at large language models, the root cause of Reprompt is the AI system's inability to delineate between instructions directly entered by a user and those sent in a request, paving the way for indirect prompt injections when parsing untrusted data. "There's no limit to the amount or type of data that can be exfiltrated. The server can request information based on earlier responses," Varonis said. "For example, if it detects the victim works in a certain industry, it can probe for even more sensitive details." "Since all commands are delivered from the server after the initial prompt, you can't determine what data is being exfiltrated just by inspecting the starting prompt. The real instructions are hidden in the server's follow-up requests." The disclosure coincides with the discovery of a broad set of adversarial techniques targeting AI-powered tools that bypass safeguards, some of which get triggered when a user performs a routine search - The findings highlight how prompt injections remain a persistent risk, necessitating the need for adopting layered defenses to counter the threat. It's also recommended to ensure sensitive tools do not run with elevated privileges and limit agentic access to business-critical information where applicable. "As AI agents gain broader access to corporate data and autonomy to act on instructions, the blast radius of a single vulnerability expands exponentially," Noma Security said. Organizations deploying AI systems with access to sensitive data must carefully consider trust boundaries, implement robust monitoring, and stay informed about emerging AI security research.

Daily Brief Summary

VULNERABILITIES // New Reprompt Attack Exploits Microsoft Copilot for Data Exfiltration

Researchers identified a vulnerability in Microsoft Copilot, termed Reprompt, allowing data exfiltration with a single click, bypassing enterprise security measures.

The attack requires no user interaction beyond clicking a legitimate Microsoft link, maintaining attacker control even after the Copilot session ends.

Reprompt uses a chain of techniques to execute commands via the "q" parameter, making it difficult to detect data exfiltration from the initial prompt.

Microsoft has addressed the issue following responsible disclosure, ensuring enterprise customers using Microsoft 365 Copilot remain unaffected.

This vulnerability highlights the persistent risk of prompt injections in AI systems, necessitating layered defenses and limited access to sensitive data.

Organizations must enforce robust monitoring and stay updated on AI security research to protect sensitive data accessed by AI systems.

The discovery underscores the expanding impact of AI vulnerabilities, emphasizing the need for careful consideration of trust boundaries in AI deployments.