ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Prompts

Malicious actors can exploit default configurations in ServiceNow’s Now Assist generative artificial intelligence (AI) platform and leverage its agentic capabilities to conduct prompt injection attacks.
The second-order prompt injection, according to AppOmni, makes use of Now Assist’s agent-to-agent discovery to execute unauthorized actions, enabling attackers to copy and exfiltrate sensitive

Total
0
Shares
Leave a Reply

Your email address will not be published. Required fields are marked *

Previous Post

Fortinet Warns of New FortiWeb CVE-2025-58034 Vulnerability Exploited in the Wild

Next Post

EdgeStepper Implant Reroutes DNS Queries to Deploy Malware via Hijacked Software Updates

Related Posts
Total
0
Share