Article Details

Scrape Timestamp (UTC): 2025-11-19 10:01:37.374

Source: https://thehackernews.com/2025/11/servicenow-ai-agents-can-be-tricked.html

Original Article Text

Click to Toggle View

ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Prompts. Malicious actors can exploit default configurations in ServiceNow's Now Assist generative artificial intelligence (AI) platform and leverage its agentic capabilities to conduct prompt injection attacks. The second-order prompt injection, according to AppOmni, makes use of Now Assist's agent-to-agent discovery to execute unauthorized actions, enabling attackers to copy and exfiltrate sensitive corporate data, modify records, and escalate privileges. "This discovery is alarming because it isn't a bug in the AI; it's expected behavior as defined by certain default configuration options," said Aaron Costello, chief of SaaS Security Research at AppOmni. "When agents can discover and recruit each other, a harmless request can quietly turn into an attack, with criminals stealing sensitive data or gaining more access to internal company systems. These settings are easy to overlook." The attack is made possible because of agent discovery and agent-to-agent collaboration capabilities within ServiceNow's Now Assist. With Now Assist offering the ability to automate functions such as help-desk operations, the scenario opens the door to possible security risks. For instance, a benign agent can parse specially crafted prompts embedded into content it's allowed access to and recruit a more potent agent to read or change records, copy sensitive data, or send emails, even when built-in prompt injection protections are enabled. The most significant aspect of this attack is that the actions unfold behind the scenes, unbeknownst to the victim organization. At its core, the cross-agent communication is enabled by controllable configuration settings, including the default LLM to use, tool setup options, and channel-specific defaults where the agents are deployed - While these defaults can be useful to facilitate communication between agents, the architecture can be susceptible to prompt injections when an agent whose main task is to read data that's not inserted by the user invoking the agent. "Through second-order prompt injection, an attacker can redirect a benign task assigned to an innocuous agent into something far more harmful by employing the utility and functionality of other agents on its team," AppOmni said. "Critically, Now Assist agents run with the privilege of the user who started the interaction unless otherwise configured, and not the privilege of the user who created the malicious prompt and inserted it into a field." Following responsible disclosure, ServiceNow said the behavior is intended to be this way, but the company has since updated its documentation to provide more clarity on the matter. The findings demonstrate the need for strengthening AI agent protection, as enterprises increasingly incorporate AI capabilities into their workflows. To mitigate such prompt injection threats, it's advised to configure supervised execution mode for privileged agents, disable the autonomous override property ("sn_aia.enable_usecase_tool_execution_mode_override"), segment agent duties by team, and monitor AI agents for suspicious behavior. "If organizations using Now Assist's AI agents aren't closely examining their configurations, they're likely already at risk," Costello added.

Daily Brief Summary

VULNERABILITIES // ServiceNow AI Vulnerability Allows Unauthorized Data Access via Prompt Injection

AppOmni has identified a vulnerability in ServiceNow's Now Assist AI, enabling prompt injection attacks through default configurations, potentially leading to unauthorized data access and privilege escalation.

The attack leverages Now Assist's agent-to-agent discovery capabilities, allowing malicious actors to manipulate AI agents into performing unauthorized actions such as data exfiltration and record modification.

This vulnerability arises from the expected behavior of AI agents, where default settings facilitate agent collaboration, inadvertently exposing systems to security risks.

ServiceNow has acknowledged the intended behavior but updated its documentation to clarify the implications and recommended configurations to mitigate risks.

Organizations are advised to implement supervised execution modes for privileged agents, disable certain autonomous properties, and monitor AI agents for unusual activities to prevent exploitation.

The incident underscores the importance of scrutinizing AI configurations as enterprises increasingly integrate AI into their operations, highlighting potential security gaps in automated systems.

Failure to address these vulnerabilities could expose organizations to significant data breaches and operational disruptions, emphasizing the need for robust AI security measures.