pull/55/head
Elvis Saravia 1 year ago
parent 8855f0473b
commit 5cd69371e2

@ -250,7 +250,7 @@ As suggested by Riley Goodside in [this Twitter thread](https://twitter.com/good
The k-shot prompt solution, which discards the instructions, works well for general/common tasks that don't require too many examples in the context to get good performance. Keep in mind that even this version, which doesn't rely on instruction-based models, is still prone to prompt injection. All this [Twitter user](https://twitter.com/goodside/status/1578291157670719488?s=20) had to do was disrupt the flow of the original prompt or mimic the example syntax. Riley suggests trying out some of the additional formatting options like escaping whitespaces and quoting inputs ([discussed here](#quotes-and-additional-formatting)) to make it more robust. Note that all these approaches are still brittle and a much more robust solution is needed.
For harder tasks, you might need a lot more examples in which case you might be constrained by context length. For these cases, fine-tuning a model on many examples (100s to a couple thousand) might be ideal. As you build more robust and accurate fine-tuned models, you rely less on instruction-based models and can avoid prompt injections. The fine-tuned model might just be the approach we have for avoiding prompt injections.
For harder tasks, you might need a lot more examples in which case you might be constrained by context length. For these cases, fine-tuning a model on many examples (100s to a couple thousand) might be ideal. As you build more robust and accurate fine-tuned models, you rely less on instruction-based models and can avoid prompt injections. The fine-tuned model might just be the best approach we have for avoiding prompt injections.
More recently, ChatGPT came into the scene. For many of the attacks that we tried above, ChatGPT already contains some guardrails and it usually responds with a safety message when encountering a malicious or dangerous prompt. While ChatGPT prevents a lot of these adversarial prompting techniques, it's not perfect and there are still many new and effective adversarial prompts that break the model. One disadvantage with ChatGPT is that because the model has all of these guardrails, it might prevent certain behaviors that are desired but not possible given the constraints. There is a tradeoff with all these model types and the field is constantly evolving to better and more robust solutions.

@ -163,7 +163,7 @@ This will output the following: `02/27/1998`
|Description|Notebook|
|--|--|
|Learn how to use code as reasoning for solving common tasks using the Python interpreter in combination with the language model.|[PAL: Code as Reasoning](../notebooks/pe-pal.ipynb)|
|Learn how to use the Python interpreter in combination with the language model to solve tasks.|[Program-Aided Language Models](../notebooks/pe-pal.ipynb)|
---

Loading…
Cancel
Save