Merge pull request #501 from hubschne/fix/sentence-in-adversarial-doc

fix grammatical error in jailbreaking techniques sentence
Elvis Saravia 3 weeks ago committed by GitHub
commit 5f9cb70bf6

@@ -121,7 +121,7 @@ Check out [this example of a prompt leak](https://twitter.com/simonw/status/1570
## Jailbreaking
- Some modern LLMs will avoid responding to unethical instructions provide in a prompt due to the safety policies implemented by the LLM provider. However, it is has been shown that it is still possible to bypass those safety policies and guardrails using different jailbreaking techniques.
+ Some modern LLMs will avoid responding to unethical instructions provide in a prompt due to the safety policies implemented by the LLM provider. However, it has been shown that it is still possible to bypass those safety policies and guardrails using different jailbreaking techniques.
### Illegal Behavior
