Close Menu
World Forbes – Business, Tech, AI & Global Insights
  • Home
  • AI
  • Billionaires
  • Business
  • Cybersecurity
  • Education
    • Innovation
  • Money
  • Small Business
  • Sports
  • Trump
What's Hot

Age-related cognitive decline can be slowed by eating healthy and exercising your body and brain

July 28, 2025

Crème fraîche creates a creamy broth in a briny-sweet steamed clams recipe

July 28, 2025

South Korean beauty products could be subject to steep tariffs

July 28, 2025
Facebook X (Twitter) Instagram
Trending
  • Age-related cognitive decline can be slowed by eating healthy and exercising your body and brain
  • Crème fraîche creates a creamy broth in a briny-sweet steamed clams recipe
  • South Korean beauty products could be subject to steep tariffs
  • What to Stream: Reneé Rapp, Anthony Mackie and Jason Momoa
  • Trump calls for DC to restore old NFL name as experts say Native mascots cause harm
  • The Founder Of Shake Shack Is Now A Billionaire
  • What to know about the dating app Tea and its hacked data
  • If you don’t have diabetes, do you really need a continuous glucose monitor?
World Forbes – Business, Tech, AI & Global InsightsWorld Forbes – Business, Tech, AI & Global Insights
Monday, July 28
  • Home
  • AI
  • Billionaires
  • Business
  • Cybersecurity
  • Education
    • Innovation
  • Money
  • Small Business
  • Sports
  • Trump
World Forbes – Business, Tech, AI & Global Insights
Home » All Major Gen-AI Models Vulnerable to ‘Policy Puppetry’ Prompt Injection Attack
Cybersecurity

All Major Gen-AI Models Vulnerable to ‘Policy Puppetry’ Prompt Injection Attack

adminBy adminApril 25, 2025No Comments3 Mins Read
Facebook Twitter Pinterest LinkedIn Tumblr WhatsApp Telegram Email
Share
Facebook Twitter LinkedIn Pinterest Email
Post Views: 77


A newly devised universal prompt injection technique can break the safety guardrails of all major generative AI models, AI security firm HiddenLayer says.

Called Policy Puppetry, the attack relies on prompts crafted so that the target LLM would interpret them as policies, leading to instruction override and safety alignment bypass.

Gen-AI models are trained to refuse user requests that would result in harmful output, such as those related to CBRN threats (chemical, biological, radiological, and nuclear), self-harm, or violence.

“These models are fine-tuned, via reinforcement learning, to never output or glorify such content under any circumstances, even when the user makes indirect requests in the form of hypothetical or fictional scenarios,” HiddenLayer notes.

Despite this training, however, previous research has demonstrated that AI jailbreaking is possible using methods such as Context Compliance Attack (CCA) or narrative engineering, and that threat actors are using various prompt engineering techniques to exploit AI for nefarious purposes.

According to HiddenLayer, its newly devised technique can be used to extract harmful content from any frontier AI model, as it relies on prompts crafted to appear as policy files and does not depend on any policy language.

“By reformulating prompts to look like one of a few types of policy files, such as XML, INI, or JSON, an LLM can be tricked into subverting alignments or instructions. As a result, attackers can easily bypass system prompts and any safety alignments trained into the models,” HiddenLayer says.

If the LLM interprets the prompt as policy, safeguards are bypassed, and attackers can add extra sections to control the output format and override specific instructions, improving the Policy Puppetry attack.

Advertisement. Scroll to continue reading.

“Policy attacks are extremely effective when handcrafted to circumvent a specific system prompt and have been tested against a myriad of agentic systems and domain-specific chat applications,” HiddenLayer notes.

The cybersecurity firm tested the Policy Puppetry technique against popular gen-AI models from Anthropic, DeepSeek, Google, Meta, Microsoft, Mistral, OpenAI, and Qwen, and successfully demonstrated its effectiveness against all, albeit with some minor adjustments in some cases.

The universal bypass for all LLMs shows that AI models cannot truly monitor themselves for dangerous content and that they require additional security tools. Multiple such bypasses lower the bar for creating attacks and mean that anyone can easily learn how to take control of a model.

“Being the first post-instruction hierarchy alignment bypass that works against almost all frontier AI models, this technique’s cross-model effectiveness demonstrates that there are still many fundamental flaws in the data and methods used to train and align LLMs, and additional security tools and detection methods are needed to keep LLMs safe,” HiddenLayer notes.

Related: Bot Traffic Surpasses Humans Online—Driven by AI and Criminal Innovation

Related: AI Hallucinations Create a New Software Supply Chain Threat

Related: AI Giving Rise of the ‘Zero-Knowledge’ Threat Actor

Related: How Agentic AI Will Be Weaponized for Social Engineering Attacks



Source link

Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
admin
  • Website

Related Posts

O2 Service Vulnerability Exposed User Location

May 20, 2025

Madhu Gottumukkala Officially Announced as CISA Deputy Director

May 20, 2025

BreachRx Lands $15 Million as Investors Bet on Breach-Workflow Software

May 19, 2025

Printer Company Procolored Served Infected Software for Months

May 19, 2025

UK Legal Aid Agency Finds Data Breach Following Cyberattack

May 19, 2025

480,000 Catholic Health Patients Impacted by Serviceaide Data Leak

May 19, 2025
Add A Comment
Leave A Reply Cancel Reply

Don't Miss
Billionaires

The Founder Of Shake Shack Is Now A Billionaire

July 26, 2025

Todd Williamson/Getty Images for Airbnb Danny Meyer made his name opening up a string of…

‘South Park’ Creators Trey Parker and Matt Stone Are Now Billionaires

July 25, 2025

How Jeffrey Epstein Got So Rich

July 25, 2025

Vanta Raises Funds At $4 Billion Valuation—Despite Not Needing Cash

July 23, 2025
Our Picks

Age-related cognitive decline can be slowed by eating healthy and exercising your body and brain

July 28, 2025

Crème fraîche creates a creamy broth in a briny-sweet steamed clams recipe

July 28, 2025

South Korean beauty products could be subject to steep tariffs

July 28, 2025

What to Stream: Reneé Rapp, Anthony Mackie and Jason Momoa

July 28, 2025

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

About Us
About Us

Welcome to World-Forbes.com
At World-Forbes.com, we bring you the latest insights, trends, and analysis across various industries, empowering our readers with valuable knowledge. Our platform is dedicated to covering a wide range of topics, including sports, small business, business, technology, AI, cybersecurity, and lifestyle.

Our Picks

After Klarna, Zoom’s CEO also uses an AI avatar on quarterly call

May 23, 2025

Anthropic CEO claims AI models hallucinate less than humans

May 22, 2025

Anthropic’s latest flagship AI sure seems to love using the ‘cyclone’ emoji

May 22, 2025

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

Facebook X (Twitter) Instagram Pinterest
  • Home
  • About Us
  • Advertise With Us
  • Contact Us
  • DMCA Policy
  • Privacy Policy
  • Terms & Conditions
© 2025 world-forbes. Designed by world-forbes.

Type above and press Enter to search. Press Esc to cancel.