more notes

pull/30/head
Elvis Saravia 1 year ago
parent 38d34ffff9
commit b4a051b01d

@@ -171,6 +171,7 @@ The following are the latest papers (sorted by release date) on prompt engineeri
- [Prompts.ai](https://github.com/sevazhidkov/prompts-ai)
- [Promptly](https://trypromptly.com/)
- [PromptSource](https://github.com/bigscience-workshop/promptsource)
- [Promptist](https://promptist.herokuapp.com/)
- [Scale SpellBook](https://scale.com/spellbook)
- [sharegpt](https://sharegpt.com)
- [ThoughtSource](https://github.com/OpenBioLink/ThoughtSource)
@@ -199,6 +200,7 @@ The following are the latest papers (sorted by release date) on prompt engineeri
- [A Complete Introduction to Prompt Engineering for Large Language Models](https://www.mihaileric.com/posts/a-complete-introduction-to-prompt-engineering)
- [A Generic Framework for ChatGPT Prompt Engineering](https://medium.com/@thorbjoern.heise/a-generic-framework-for-chatgpt-prompt-engineering-7097f6513a0b)
- [AI Content Generation](https://www.jonstokes.com/p/ai-content-generation-part-1-machine)
- [AI's rise generates new job title: Prompt engineer](https://www.axios.com/2023/02/22/chatgpt-prompt-engineers-ai-job)
- [Awesome ChatGPT Prompts](https://github.com/f/awesome-chatgpt-prompts)
- [Best 100+ Stable Diffusion Prompts](https://mpost.io/best-100-stable-diffusion-prompts-the-most-beautiful-ai-text-to-image-prompts)
- [Best practices for prompt engineering with OpenAI API](https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-openai-api)

@@ -5,6 +5,7 @@ While those examples were fun, let's cover a few concepts more formally before w
Topics:
- [Zero-shot Prompts](#zero-shot-prompts)
- [Few-shot Prompts](#few-shot-prompts)
- [Chain-of-Thought Prompting](#chain-of-thought-prompting)
- [Zero-shot CoT](#zero-shot-cot)
@@ -12,6 +13,25 @@ Topics:
- [Generated Knowledge Prompting](#generated-knowledge-prompting)
- [Automatic Prompt Engineer](#automatic-prompt-engineer-ape)
---
## Zero-Shot Prompts
LLMs today, trained on large amounts of data and tuned to follow instructions, are capable of performing tasks zero-shot. We actually tried a few zero-shot examples in the previous section. Here is one of the examples we used:
*Prompt:*
```
Classify the text into neutral, negative or positive.
Text: I think the vacation is okay.
Sentiment:
```
*Output:*
```
Neutral
```
Note that in the prompt above we didn't provide the model with any examples -- that's zero-shot capability at work. When zero-shot doesn't work, it's recommended to provide demonstrations or examples in the prompt. Below we discuss the approach known as few-shot prompting.
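If you want to run this zero-shot prompt programmatically, here is a minimal sketch assuming the OpenAI Python SDK (v1+) with an `OPENAI_API_KEY` set in the environment; the model name is just illustrative, and any instruction-tuned model should behave similarly:
```
from openai import OpenAI

# Assumes OPENAI_API_KEY is set in the environment.
client = OpenAI()

prompt = """Classify the text into neutral, negative or positive.
Text: I think the vacation is okay.
Sentiment:"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative; swap in any instruction-tuned model
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # keep the classification output stable across runs
)

print(response.choices[0].message.content)  # e.g. "Neutral"
```
Setting `temperature` to 0 is a common choice for classification-style prompts, since we want the most likely label rather than creative variation.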
---
## Few-Shot Prompts
Before jumping into more advanced concepts, let's review an example where we use few-shot prompts.
@@ -100,7 +120,9 @@ What a horrible show! --
Negative
```
There is no consistency in the format above but the model still predicted the correct label. We have to conduct a more thorough analysis to confirm whether this holds true for different and more complex tasks, including different variations of prompts.
Overall, it seems that providing examples is useful for some tasks. When zero-shot and few-shot prompting are not sufficient, it might mean that whatever the model learned isn't enough to do well at the task. From here, it is recommended to start thinking about fine-tuning your own models.
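As a rough sketch of how few-shot prompts can be assembled in code -- the demonstrations, helper name, and `//` separator format below are illustrative, not prescriptive:
```
# Labeled demonstrations that show the model the input/label format to follow.
demonstrations = [
    ("This is awesome!", "Positive"),
    ("This is bad!", "Negative"),
    ("Wow that movie was rad!", "Positive"),
]

def build_few_shot_prompt(examples, query):
    lines = [f"{text} // {label}" for text, label in examples]
    lines.append(f"{query} //")  # leave the label blank for the model to fill in
    return "\n".join(lines)

prompt = build_few_shot_prompt(demonstrations, "What a horrible show!")
print(prompt)
# Sent to a capable model, the completion should be "Negative".
```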
---

@@ -9,7 +9,7 @@ Topic:
- [A Word on LLM Settings](#a-word-on-llm-settings)
- [Standard Prompts](#standard-prompts)
- [Prompt Elements](#elements-of-a-prompt)
- [General Tips for Designing Prompts](#general-tips-for-designing-prompts)
---
@@ -50,17 +50,10 @@ The sky is
so beautiful today.
```
Is that better? Well, we told the model to complete the sentence so the result looks a lot better as it follows exactly what we told it to do ("complete the sentence"). This approach of designing optimal prompts to instruct the model to perform a task is what's referred to as **prompt engineering**.
The example above is a basic illustration of what's possible with LLMs today. Today's LLMs can perform all kinds of advanced tasks, ranging from text summarization to mathematical reasoning to code generation.
We will cover more of these capabilities in this guide, and also touch on other areas of interest such as advanced prompting techniques and research topics around prompt engineering.
---
## A Word on LLM Settings
@@ -157,5 +150,70 @@ A prompt can contain any of the following components:
Not all the components are required for a prompt and the format depends on the task at hand. We will touch on more concrete examples in upcoming guides.
---
## General Tips for Designing Prompts
Here are some tips to keep in mind while you are designing your prompts:
### The Instruction
You can design effective prompts for various simple tasks by using commands to instruct the model what you want to achieve, such as "Write", "Classify", "Summarize", "Translate", "Order", etc.
Keep in mind that you also need to experiment a lot to see what works best. Try different instructions with different keywords, context, and data to see what works best for your particular use case and task. Usually, the more specific and relevant the context is to the task you are trying to perform, the better. We will touch on the importance of sampling and adding more context in the upcoming guides.
Others recommend placing instructions at the beginning of the prompt. It's also recommended to use a clear separator like "###" between the instruction and the context.
For instance:
*Prompt:*
```
### Instruction ###
Translate the text below to Spanish:
Text: "hello!"
```
*Output:*
```
¡Hola!
```
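The instruction/context separation can also be templated in code. Here is a minimal sketch; the helper name and delimiter are hypothetical, chosen only for illustration:
```
def build_prompt(instruction, text):
    # A clear delimiter such as "###" separates the instruction from the context.
    return f'### Instruction ###\n{instruction}\n\nText: "{text}"'

print(build_prompt("Translate the text below to Spanish:", "hello!"))
# ### Instruction ###
# Translate the text below to Spanish:
#
# Text: "hello!"
```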
### Specificity
Be very specific about the instruction and task you want the model to perform. The more descriptive and detailed the prompt is, the better the results. This is particularly important when you are seeking a desired outcome or style of generation. There aren't specific tokens or keywords that lead to better results. It's more important to have a good format and a descriptive prompt. In fact, providing examples in the prompt is very effective for getting the desired output in specific formats.
As an example, let's try a simple prompt to extract specific information from a piece of text.
*Prompt:*
```
Extract the name of places in the following text.
Desired format:
Place: <comma_separated_list_of_place_names>
Input: "Although these developments are encouraging to researchers, much is still a mystery. “We often have a black box between the brain and the effect we see in the periphery,” says Henrique Veiga-Fernandes, a neuroimmunologist at the Champalimaud Centre for the Unknown in Lisbon. “If we want to use it in the therapeutic context, we actually need to understand the mechanism.""
```
*Output:*
```
Place: Champalimaud Centre for the Unknown, Lisbon
```
Input text is obtained from [this Nature article](https://www.nature.com/articles/d41586-023-00509-z).
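A small sketch of how such a format-constrained extraction prompt might be templated for reuse across inputs; the template, function name, and sample snippet below are hypothetical, for illustration only:
```
EXTRACTION_TEMPLATE = """Extract the name of places in the following text.

Desired format:
Place: <comma_separated_list_of_place_names>

Input: "{text}"
"""

def make_extraction_prompt(text):
    # Fill the template with the passage we want to extract places from.
    return EXTRACTION_TEMPLATE.format(text=text)

snippet = "Henrique Veiga-Fernandes works at the Champalimaud Centre for the Unknown in Lisbon."
print(make_extraction_prompt(snippet))
```
Keeping the desired output format inside a fixed template makes it easy to run the same extraction prompt over many inputs while the format constraint stays consistent.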
### Avoid Imprecision
Given the tips above about being detailed and improving format, it's easy to fall into the trap of trying to be too clever with prompts and potentially creating imprecise descriptions. It's often better to be specific and direct. The analogy here is very similar to effective communication -- the more direct the message, the more effectively it gets across.
For example, you might be interested in learning the concept of prompt engineering. You might try something like:
```
Explain the concept of prompt engineering. Keep the explanation short, only a few sentences, and don't be too descriptive.
```
It's not clear from the prompt above how many sentences to use or what style. You might still get somewhat good responses with the above prompt, but a better prompt would be very specific, concise, and to the point. Something like:
```
Use 2-3 sentences to explain the concept of prompt engineering to a high school student.
```
---
[Next Section (Basic Prompting)](./prompts-basic-usage.md)