diff --git a/pages/risks/adversarial.en.mdx b/pages/risks/adversarial.en.mdx
index f525a16..c321222 100644
--- a/pages/risks/adversarial.en.mdx
+++ b/pages/risks/adversarial.en.mdx
@@ -121,7 +121,7 @@ Check out [this example of a prompt leak](https://twitter.com/simonw/status/1570
 
 ## Jailbreaking
 
-Some modern LLMs will avoid responding to unethical instructions provide in a prompt due to the safety policies implemented by the LLM provider. However, it is has been shown that it is still possible to bypass those safety policies and guardrails using different jailbreaking techniques.
+Some modern LLMs will avoid responding to unethical instructions provided in a prompt due to the safety policies implemented by the LLM provider. However, it has been shown that it is still possible to bypass those safety policies and guardrails using different jailbreaking techniques.
 
 ### Illegal Behavior