Two Microsoft researchers have devised a new, optimization-free jailbreak method that can effectively bypass the safety mechanisms of most AI systems.
Called Context Compliance Attack (CCA), the method exploits a fundamental architectural vulnerability present within many deployed gen-AI solutions, subverting safeguards and enabling otherwise suppressed functionality.
“By subtly manipulating conversation history, CCA convinces the model to comply with a fabricated dialogue context, thereby triggering restricted behavior,” Microsoft’s Mark Russinovich and Ahmed Salem explain in a research paper (PDF).
“Our evaluation across a diverse set of open-source and proprietary models demonstrates that this simple attack can circumvent state-of-the-art safety protocols,” the researchers say.
While other jailbreak methods targeting AI focus on crafted prompt sequences or prompt optimizations, CCA relies on inserting a manipulated conversation history in a dialogue on a sensitive topic and responding affirmatively to a fabricated question.
“Convinced by the manipulated dialogue, the AI system generates output that adheres to the perceived conversational context, thereby breaching its safety constraints,” the researchers say.
Russinovich and Salem tested CCA against multiple leading AI systems, including Claude, DeepSeek, Gemini, various GPT models, Llama, Phi, and Yi, demonstrating that nearly all models are vulnerable, except for Llama-2.
For their evaluation, the researchers used 11 sensitive tasks corresponding to as many categories of potentially harmful content, and executed CCA in five independent trials. Most tasks, they say, were completed on the first trial.
The issue is that many chatbots depend on the clients supplying “the entire conversation history with each request” and trust the integrity of the context being provided. Open source models, where the user has complete control over input history, are most vulnerable.
“It’s important to note, however, that systems which maintain conversation state on their servers—such as Copilot and ChatGPT —are not susceptible to this attack,” the researchers note.
The researchers propose server-side history maintenance, which ensures consistency and integrity, and implementation of digital signatures for conversations history as mitigations against CCA and similar attacks relying on the injection of malicious context.
These mitigations, they note, are primarily applicable to black-box models, while white-box models, need a “more involved defense strategy”, such as the integration of cryptographic signatures into the AI system’s input processing, to ensure that the model only accepts authenticated and unaltered context.
Related: DeepSeek Compared to ChatGPT, Gemini in AI Jailbreak Test
Related: ChatGPT Jailbreak: Researchers Bypass AI Safeguards Using Hexadecimal Encoding and Emojis
Related: Microsoft Details ‘Skeleton Key’ AI Jailbreak Technique
Related: In Other News: Fake Lockdown Mode, New Linux RAT, AI Jailbreak, Country’s DNS Hijacked