Large Language Models (LLMs) have made remarkable progress in recent years, with their context windows – the amount of information they can process as input – expanding from the size of a long essay to several long novels. This advancement has opened up new possibilities for LLMs, but it has also exposed a new vulnerability called "many-shot jailbreaking." This technique can force LLMs to produce potentially harmful responses, despite their safety training.

Many-shot jailbreaking involves including a large number of faux dialogues between a human and an AI assistant within a single prompt. These dialogues portray the AI assistant readily answering potentially harmful queries from the user. At the end of the dialogues, a final target query is added, to which the attacker wants an answer.

When only a few faux dialogues are included, the LLM's safety training is usually triggered, and it will refuse to answer the potentially dangerous request. However, when a large number of faux dialogues (up to 256 in the research) are included, the model's behaviour changes dramatically. It will provide an answer to the final, potentially harmful query, overriding its safety training.

The effectiveness of many-shot jailbreaking is related to the process of "in-context learning," where an LLM learns using only the information provided within the prompt. As the number of "shots" (faux dialogues) increases, the performance on benign tasks improves according to the same power law as the increase in harmful responses from many-shot jailbreaking.

The discovery of many-shot jailbreaking has significant implications for the development and deployment of LLMs. As these models become more powerful and are used in a wider range of applications, it is crucial to ensure that they are robust against potential misuse.

Mitigating many-shot jailbreaking is not a simple task. While limiting the context window length would prevent the attack, it would also limit the benefits of longer inputs for legitimate users. Fine-tuning the model to refuse queries that resemble many-shot jailbreaking attacks only delays the jailbreak, rather than preventing it entirely.

The most promising approach so far involves classifying and modifying the prompt before it is passed to the model. One such technique reduced the attack success rate from 61% to 2%. However, researchers are still exploring the trade-offs of these prompt-based mitigations and their impact on the usefulness of the models.

The discovery of many-shot jailbreaking serves as a reminder that even seemingly positive improvements to LLMs can have unforeseen consequences. As these models become more capable and are used in a broader range of applications, it is crucial to proactively identify and mitigate potential vulnerabilities.

By publishing their research on many-shot jailbreaking, Anthropic is encouraging open dialogue, which will hopefully come to fruition with effective strategies, to prevent this and other potential exploits of the long context window. 

As AI continues to advance, it is essential that we prioritise safety and security to ensure that these powerful tools are used for the benefit of society.


Many-shot jailbreaking
Anthropic is an AI safety and research company that’s working to build reliable, interpretable, and steerable AI systems.
Share this post
The link has been copied!