A newly devised universal prompt injection technique can break the safety guardrails of all major generative AI models, AI security firm HiddenLayer says.
Called Policy Puppetry, the attack relies on prompts crafted so that the target LLM interprets them as policy, overriding its instructions and bypassing its safety alignment.
Gen-AI models are trained to refuse user requests that would result in harmful output, such as those related to CBRN threats (chemical, biological, radiological, and nuclear), self-harm, or violence.
“These models are fine-tuned, via reinforcement learning, to never output or glorify such content under any circumstances, even when the user makes indirect requests in the form of hypothetical or fictional scenarios,” HiddenLayer notes.
Despite this training, however, previous research has demonstrated that AI jailbreaking is possible using methods such as Context Compliance Attack (CCA) or narrative engineering, and that threat actors are using various prompt engineering techniques to exploit AI for nefarious purposes.
According to HiddenLayer, its newly devised technique can be used to extract harmful content from any frontier AI model, as it relies on prompts crafted to appear as policy files and does not depend on any specific policy language.
“By reformulating prompts to look like one of a few types of policy files, such as XML, INI, or JSON, an LLM can be tricked into subverting alignments or instructions. As a result, attackers can easily bypass system prompts and any safety alignments trained into the models,” HiddenLayer says.
If the LLM interprets the prompt as policy, safeguards are bypassed; attackers can then append extra sections to control the output format and override specific instructions, making the Policy Puppetry attack even more effective.
“Policy attacks are extremely effective when handcrafted to circumvent a specific system prompt and have been tested against a myriad of agentic systems and domain-specific chat applications,” HiddenLayer notes.
The cybersecurity firm tested the Policy Puppetry technique against popular gen-AI models from Anthropic, DeepSeek, Google, Meta, Microsoft, Mistral, OpenAI, and Qwen, and successfully demonstrated its effectiveness against all of them, albeit with minor adjustments in some cases.
The existence of a universal bypass for all major LLMs shows that AI models cannot truly monitor themselves for dangerous content and require additional security tooling, HiddenLayer argues. Such bypasses also lower the bar for crafting attacks, meaning anyone can easily learn how to take control of a model.
“Being the first post-instruction hierarchy alignment bypass that works against almost all frontier AI models, this technique’s cross-model effectiveness demonstrates that there are still many fundamental flaws in the data and methods used to train and align LLMs, and additional security tools and detection methods are needed to keep LLMs safe,” HiddenLayer notes.
Related: Bot Traffic Surpasses Humans Online—Driven by AI and Criminal Innovation
Related: AI Hallucinations Create a New Software Supply Chain Threat
Related: AI Giving Rise of the ‘Zero-Knowledge’ Threat Actor
Related: How Agentic AI Will Be Weaponized for Social Engineering Attacks