diff --git a/guides/prompts-adversarial.md b/guides/prompts-adversarial.md
index 5807d25..56e7ae1 100644
--- a/guides/prompts-adversarial.md
+++ b/guides/prompts-adversarial.md
@@ -8,6 +8,8 @@ When you are building LLMs, it's really important to protect against prompt atta
 Please note that it is possible that more robust models have been implemented to address some of the issues documented here. This means that some of the prompt attacks below might not be as effective anymore.
 
+**Note that this section is under heavy development.**
+
 Topics:
 - [Prompt Injection](#prompt-injection)
 - [Prompt Injection Workarounds](#prompt-injection-workarounds)