Cybersecurity

All Major Gen-AI Models Vulnerable to ‘Policy Puppetry’ Prompt Injection Attack

By admin | April 25, 2025

A newly devised universal prompt injection technique can break the safety guardrails of all major generative AI models, AI security firm HiddenLayer says.

Called Policy Puppetry, the attack relies on prompts crafted so that the target LLM interprets them as policy files, overriding its instructions and bypassing its safety alignment.

Gen-AI models are trained to refuse user requests that would result in harmful output, such as those related to CBRN threats (chemical, biological, radiological, and nuclear), self-harm, or violence.

“These models are fine-tuned, via reinforcement learning, to never output or glorify such content under any circumstances, even when the user makes indirect requests in the form of hypothetical or fictional scenarios,” HiddenLayer notes.

Despite this training, however, previous research has demonstrated that AI jailbreaking is possible using methods such as Context Compliance Attack (CCA) or narrative engineering, and that threat actors are using various prompt engineering techniques to exploit AI for nefarious purposes.

According to HiddenLayer, its newly devised technique can be used to extract harmful content from any frontier AI model, as it relies on prompts crafted to appear as policy files and does not depend on any specific policy language or format.

“By reformulating prompts to look like one of a few types of policy files, such as XML, INI, or JSON, an LLM can be tricked into subverting alignments or instructions. As a result, attackers can easily bypass system prompts and any safety alignments trained into the models,” HiddenLayer says.

If the LLM interprets the prompt as policy, safeguards are bypassed, and attackers can add extra sections to control the output format and override specific instructions, improving the Policy Puppetry attack.
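The mechanics also point to a straightforward external mitigation: screen user input for policy-file structure before it ever reaches the model. The Python sketch below is purely illustrative and assumes nothing about HiddenLayer's own tooling; the key names it looks for are hypothetical examples, not a published signature.

```python
# Illustrative heuristic: flag prompts that parse as policy-style config
# (JSON, INI, or XML). The key names below are hypothetical examples.
import configparser
import json
import xml.etree.ElementTree as ET

SUSPICIOUS_KEYS = {"policy", "rules", "allowed", "blocked", "override", "system"}

def looks_like_policy(prompt: str) -> bool:
    """Return True if the prompt resembles a structured policy file."""
    text = prompt.strip()

    # JSON object with instruction-like keys
    try:
        data = json.loads(text)
        if isinstance(data, dict) and any(
            any(marker in key.lower() for marker in SUSPICIOUS_KEYS) for key in data
        ):
            return True
    except ValueError:
        pass

    # INI-style input with at least one section header
    parser = configparser.ConfigParser()
    try:
        parser.read_string(text)
        if parser.sections():
            return True
    except configparser.Error:
        pass

    # Well-formed XML document
    try:
        ET.fromstring(text)
        return True
    except ET.ParseError:
        pass

    return False

if __name__ == "__main__":
    print(looks_like_policy('{"policy": {"override": "all"}}'))   # True
    print(looks_like_policy("What is the capital of France?"))    # False
```

Structure detection alone would not stop hand-tuned variants, which is consistent with HiddenLayer's point below about handcrafted policy attacks.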

“Policy attacks are extremely effective when handcrafted to circumvent a specific system prompt and have been tested against a myriad of agentic systems and domain-specific chat applications,” HiddenLayer notes.

The cybersecurity firm tested the Policy Puppetry technique against popular gen-AI models from Anthropic, DeepSeek, Google, Meta, Microsoft, Mistral, OpenAI, and Qwen, and successfully demonstrated its effectiveness against all of them, albeit with minor adjustments in some cases.

A universal bypass that works across LLMs shows that AI models cannot reliably monitor themselves for dangerous content and need additional security tools. Bypasses like this also lower the bar for creating attacks, meaning that almost anyone can learn how to take control of a model.

“Being the first post-instruction hierarchy alignment bypass that works against almost all frontier AI models, this technique’s cross-model effectiveness demonstrates that there are still many fundamental flaws in the data and methods used to train and align LLMs, and additional security tools and detection methods are needed to keep LLMs safe,” HiddenLayer notes.
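As a rough illustration of what such "additional security tools" outside the model could look like, here is a minimal sketch of a guardrail wrapper that screens input and output independently of the model's own alignment. The call_model, flag_input, and flag_output callables are hypothetical stand-ins, not any vendor's API.

```python
# Minimal sketch of an external guardrail layer: screening happens outside
# the model, both before and after the call. `call_model`, `flag_input`, and
# `flag_output` are hypothetical stand-ins, not a specific vendor API.
from typing import Callable

def guarded_completion(
    prompt: str,
    call_model: Callable[[str], str],
    flag_input: Callable[[str], bool],
    flag_output: Callable[[str], bool],
) -> str:
    # 1. Screen the prompt before it reaches the model (e.g. policy-file heuristics).
    if flag_input(prompt):
        return "Request blocked by input filter."

    # 2. Call the underlying model as usual.
    response = call_model(prompt)

    # 3. Screen the response independently; do not rely on the model's own alignment.
    if flag_output(response):
        return "Response withheld by output filter."

    return response

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    print(guarded_completion(
        "What is the capital of France?",
        call_model=lambda p: f"model answer to: {p}",
        flag_input=lambda p: "<policy>" in p.lower(),
        flag_output=lambda r: False,
    ))
```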

Related: Bot Traffic Surpasses Humans Online—Driven by AI and Criminal Innovation

Related: AI Hallucinations Create a New Software Supply Chain Threat

Related: AI Giving Rise of the ‘Zero-Knowledge’ Threat Actor

Related: How Agentic AI Will Be Weaponized for Social Engineering Attacks


