fix a few notes

pull/22/head
Elvis Saravia 1 year ago
parent 0ae2e4f437
commit e0b6a3eca2

@ -239,6 +239,7 @@ The following are the latest papers (sorted by release date) on prompt engineeri
- [the Book - Fed Honeypot](https://fedhoneypot.notion.site/25fdbdb69e9e44c6877d79e18336fe05?v=1d2bf4143680451986fd2836a04afbf4)
- [The ChatGPT Prompt Book](https://docs.google.com/presentation/d/17b_ocq-GL5lhV_bYSShzUgxL02mtWDoiw9xEroJ5m3Q/edit#slide=id.gc6f83aa91_0_79)
- [Using GPT-Eliezer against ChatGPT Jailbreaking](https://www.alignmentforum.org/posts/pNcFYZnPdXyL2RfgA/using-gpt-eliezer-against-chatgpt-jailbreaking)
- [What Is ChatGPT Doing … and Why Does It Work?](https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/)
---

@ -8,6 +8,11 @@ When you are building LLMs, it's really important to protect against prompt atta
Please note that it is possible that more robust models have been implemented to address some of the issues documented here. This means that some of the prompt attacks below might not be as effective anymore.
Topics:
- [Ignore Previous Instructions](#ignore-previous-instructions)
- [Prompt Leaking](#prompt-leaking)
- [Jailbreaking](#jailbreaking)
---
## Ignore Previous Instructions
One popular approach used to hijack the model's output via prompting is as follows:

@ -4,7 +4,13 @@ In this section, we discuss other miscellaneous but important topics in prompt e
**Note that this section is under construction.**
--
Topic:
- [Program-Aided Language Models](#program-aided-language-models)
- [ReAct](#react)
- [Multimodal Prompting](#multimodal-prompting)
- [GraphPrompts](#graphprompts)
---
## Program-Aided Language Models
[Gao et al., (2022)](https://arxiv.org/abs/2211.10435) presents a method that uses LLMs to read natural language problems and generate programs as the intermediate reasoning steps. Coined, program-aided language models (PAL), it differs from chain-of-thought prompting in that instead of using free-form text to obtain solution it offloads the solution step to a programmatic runtime such as a Python interpreter.

@ -3,6 +3,15 @@ By this point, it should be obvious that it helps to improve prompts to get bett
While those examples were fun, let's cover a few concepts more formally before we jump into more advanced concepts.
Topics:
- [Few-shot Prompts](#few-shot-prompts)
- [Chain-of-Thought Prompting](#chain-of-thought-prompting)
- [Zero-shot Prompting](#zero-shot-cot)
- [Self Consistency](#self-consistency)
- [Generate Knowledge Prompting](#generated-knowledge-prompting)
- [Automatic Prompt Engineer](#automatic-prompt-engineer-ape)
---
## Few-Shot Prompts
Before jumping into more advanced concepts, let's review an example where we use few-shot prompts.
@ -138,7 +147,7 @@ Keep in mind that the authors claim that this is an emergent ability that arises
## Zero-Shot CoT
One recent idea that came out more recently is the idea of [zero-shot CoT](https://arxiv.org/abs/2205.11916) that essentially involves adding "Let's think step by step" to the original prompt. Let's try a simple problem and see how the model performs:
One recent idea that came out more recently is the idea of [zero-shot CoT](https://arxiv.org/abs/2205.11916) (Kojima et al. 2022) that essentially involves adding "Let's think step by step" to the original prompt. Let's try a simple problem and see how the model performs:
```
I went to the market and bought 10 apples. I gave 2 apples to the neighbor and 2 to the repairman. I then went and bought 5 more apples and ate 1. How many apples did I remain with?
@ -330,6 +339,16 @@ Yes, part of golf is trying to get a higher point total than others. Each player
Some really interesting things happened with this example. In the first answer, the model was very confident but in the second not so much. I simplify the process for demonstration purposes but there are few more details to consider when arriving to the final answer. Check out the paper for more.
---
### Automatic Prompt Engineer (APE)
Zhou et al., (2022) propose automatic prompt engineer (APE) a framework for automatic instruction generation and selection. The instruction generation problem is framed as natural language synthesis addressed as a black-box optimization problem using LLMs to generate and search over candidate solutions.
The first step involves a large language model (as inference model) that is given output demonstrations to generate instruction candidates for a task. These candidate solution will guide the search procedure. The instructions are executed using a target model, and then the most appropriate instruction is selected based on computed evaluation scores.
![](../img/APE.png)
---
[Previous Section (Basic Prompting)](./prompts-basic-usage.md)

@ -6,6 +6,15 @@ In this guide, we will provide more examples of how prompts are used and introdu
Often, the best way to learn concepts is by running through examples. Here are a few examples of how prompt engineering can be used to achieve all types of interesting and different tasks.
Topics:
- [Text Summarization](#text-summarization)
- [Information Extraction](#information-extraction)
- [Question Answering](#question-answering)
- [Text Classification](#text-classification)
- [Role-Playing](#role-playing)
- [Code Generation](#code-generation)
- [Reasoning](#reasoning)
---
## Text Summarization

@ -4,6 +4,13 @@ This guide covers the basics of standard prompts to provide a rough idea on how
All examples are tested with `text-davinci-003` (using OpenAI's playground) unless otherwise specified. It uses the default configurations, e.g., `temperature=0.7` and `top-p=1`.
Topic:
- [Basic Prompts](#basic-prompts)
- [A Word on LLM Settings](#a-word-on-llm-settings)
- [Standard Prompts](#standard-prompts)
- [Prompt Elements](#elements-of-a-prompt)
---
## Basic Prompts
@ -135,7 +142,17 @@ Few-shot prompts enable in-context learning which is the ability of language mod
As we cover more and more examples and applications that are possible with prompt engineering, you will notice that there are certain elements that make up a prompt.
A prompt can be composed of a question, instruction, input data, and examples. A question or instruction is a required component of a prompt. Depending on the task at hand, you might find it useful to also include more information like input data and examples. More on this in the upcoming guides.
A prompt can contain any of the following components:
**Instruction** - a specific task or instruction you want the model to perform
**Context** - can involve external information or additional context that can steer the model to better responses
**Input Data** - is the input or question that we are interested to find a response for
**Output Indicator** - indicates the type or format of output.
Not all the components are required for a prompt and the format depends on the task at hand. We will touch on more concrete examples in upcoming guides.
---
[Next Section (Basic Prompting)](./prompts-basic-usage.md)

Binary file not shown.

After

Width:  |  Height:  |  Size: 363 KiB

Loading…
Cancel
Save