mirror of https://github.com/dair-ai/Prompt-Engineering-Guide (synced 2024-11-18 03:25:39 +00:00)
Merge pull request #33 from RickCarlino/rickcarlino/edits
Fix minor typos/wording issues
commit a43673ef2b
@@ -291,7 +291,7 @@ Computing for the final answer involves a few steps (check out the paper for the
LLMs continue to be improved and one popular technique involves incorporating knowledge or information to help the model make more accurate predictions.
- Using a similar idea, can the model also be used to generate knowledge before making a prediction? That's what attempted in the paper by [Liu et al. 2022](https://arxiv.org/pdf/2110.08387.pdf) -- generate knowledge to be used as part of the prompt. In particular, how helpful is this for tasks such as commonsense reasoning?
+ Using a similar idea, can the model also be used to generate knowledge before making a prediction? That's what is attempted in the paper by [Liu et al. 2022](https://arxiv.org/pdf/2110.08387.pdf) -- generate knowledge to be used as part of the prompt. In particular, how helpful is this for tasks such as commonsense reasoning?
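Schematically, this is a two-stage pipeline: first ask the model for knowledge relevant to the input, then feed that knowledge back in alongside the actual question. A minimal sketch of the idea, where `generate` is a hypothetical stand-in for a real model call and the sample question is illustrative:

```python
# Generated knowledge prompting: a two-stage sketch.

def generate(prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion call."""
    return f"<model output for: {prompt[:40]}...>"

question = "Part of golf is trying to get a higher point total than others. Yes or No?"

# Stage 1: elicit background knowledge about the concepts in the input.
knowledge = generate(
    f"Generate some knowledge about the concepts in the input.\n\nInput: {question}\nKnowledge:"
)

# Stage 2: prepend the generated knowledge to the actual question.
answer = generate(f"{knowledge}\n\nQuestion: {question}\nAnswer:")
print(answer)
```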
Let's try a simple prompt:
@@ -386,7 +386,7 @@ The first step involves a large language model (as inference model) that is give
APE discovers a better zero-shot CoT prompt than the human-engineered "Let's think step by step" prompt from Kojima et al. (2022).
- The prompt "Let's work this out it a step by step way to be sure we have the right answer." elicits chain-of-though reasoning and improves performance on the MultiArith and GSM8K benchmarks:
+ The prompt "Let's work this out in a step by step way to be sure we have the right answer." elicits chain-of-thought reasoning and improves performance on the MultiArith and GSM8K benchmarks:
![](../img/ape-zero-shot-cot.png)
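In practice, swapping in the APE-discovered prompt just means changing the trigger phrase appended after the question. A minimal sketch comparing the two triggers (the sample question is illustrative, and the commented-out `generate` call stands in for a real model API):

```python
# Zero-shot CoT: append a trigger phrase and let the model reason step by step.
TRIGGERS = [
    "Let's think step by step.",  # Kojima et al., 2022
    "Let's work this out in a step by step way to be sure we have the right answer.",  # APE
]

question = (
    "A juggler can juggle 16 balls. Half of the balls are golf balls, "
    "and half of the golf balls are blue. How many blue golf balls are there?"
)

for trigger in TRIGGERS:
    prompt = f"Q: {question}\nA: {trigger}"
    print(prompt, end="\n\n")
    # completion = generate(prompt)  # hypothetical model call goes here
```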
@@ -2,7 +2,7 @@
In the previous guide, we introduced and gave a basic example of a prompt.
- In this guide, we will provide more examples of how prompts are used and introduce key concepts that will be important for more the more advanced guides.
+ In this guide, we will provide more examples of how prompts are used and introduce key concepts that will be important for more advanced guides.
Often, the best way to learn concepts is by running through examples. Below we cover a few examples of how well-crafted prompts can be used to perform all types of interesting and different tasks.
@@ -78,7 +78,7 @@ Paragraph source: [ChatGPT: five priorities for research](https://www.nature.com
---
## Question Answering
- One of the best ways to get the model to respond specific answers is to improve the format of the prompt. As covered before, a prompt could combine instructions, context, input, and output indicator to get improved results. While not components are required, it becomes a good practice as the more specific you are with instruction, the better results you will get. Below is an example of how this would look following a more structured prompt.
+ One of the best ways to get the model to respond with specific answers is to improve the format of the prompt. As covered before, a prompt could combine instructions, context, input, and an output indicator to get improved results. While these components are not required, it becomes good practice, as the more specific you are with the instruction, the better the results you will get. Below is an example of how this would look following a more structured prompt.
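The same components are easy to assemble programmatically as well; a minimal sketch (all the strings here are illustrative placeholders):

```python
# Compose a prompt from instruction, context, input, and an output indicator.
instruction = "Answer the question based on the context below. Keep the answer short."
context = "..."   # the supporting passage goes here
question = "..."  # the question to answer

# "Answer:" is the output indicator, cueing where the completion should start.
prompt = f"{instruction}\n\nContext: {context}\n\nQuestion: {question}\n\nAnswer:"
print(prompt)
```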
*Prompt:*
```
@@ -118,7 +118,7 @@ Sentiment:
Neutral
```
- We gave the instruction to classify the text and the model responded with `'Neutral'` which is correct. Nothing is wrong with this but let's say that what we really need is for the model to give the label in the exact format we want. So instead of `Neutral` we want it to return `neutral`. How do we achieve this. There are different ways to do this. We care about specificity here, so the more information we can provide the prompt the better results. We can try providing examples to specific the correct behavior. Let's try again:
+ We gave the instruction to classify the text and the model responded with `'Neutral'`, which is correct. Nothing is wrong with this, but let's say that what we really need is for the model to give the label in the exact format we want. So instead of `Neutral` we want it to return `neutral`. How do we achieve this? There are different ways to do this. We care about specificity here, so the more information we can provide the prompt, the better the results. We can try providing examples to specify the correct behavior. Let's try again:
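Concretely, the few-shot layout used in the next prompt can be built up in code; a minimal sketch (the exemplar texts and labels are illustrative):

```python
# Few-shot exemplars that pin down the exact output format (lowercase labels).
examples = [
    ("The food was absolutely wonderful!", "positive"),
    ("I think the food was okay.", "neutral"),
]
new_text = "I think the vacation is okay."

shots = "".join(f"Text: {text}\nSentiment: {label}\n\n" for text, label in examples)
prompt = f"{shots}Text: {new_text}\nSentiment:"
print(prompt)
```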
*Prompt:*
```
@@ -197,7 +197,7 @@ I think we made some progress. You can continue improving it. I am sure if you a
---
## Code Generation
- One application where LLMs are quite effective at is code generation. Copilot is a great example of this. There is a vast number of code generation tasks you can perform with clever prompts. Let's look at a few examples below.
+ One application where LLMs are quite effective is code generation. Copilot is a great example of this. There are a vast number of code generation tasks you can perform with clever prompts. Let's look at a few examples below.
First, let's try a simple program that greets the user.
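For a task this small, a comment can serve as the entire prompt; a sketch of a plausible prompt and completion (both illustrative, not the guide's exact example):

```python
# Prompt given to the model, written as a leading comment it should continue:
#   """
#   Ask the user for their name and say "Hello"
#   """
# A plausible completion:
name = input("What is your name? ")
print(f"Hello, {name}!")
```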
@@ -239,9 +239,9 @@ This is very impressive. In this case we provided data about the database schema
---
## Reasoning
- Perhaps one of the most difficult tasks for an LLM today is one that requires some form of reasoning. Reasoning is one the areas that I am most excited about due the types of complex applications that can emerge from LLMs.
+ Perhaps one of the most difficult tasks for an LLM today is one that requires some form of reasoning. Reasoning is one of the areas that I am most excited about due to the types of complex applications that can emerge from LLMs.
- There have been some improvements on tasks involving mathematical capabilities. That said, it's important to note that current LLMs struggle to perform reasoning tasks so this require even more advanced prompt engineering techniques. We will cover these advanced techniques in the next guide. For now, we will cover a few basic examples to show arithmetic capabilities.
+ There have been some improvements on tasks involving mathematical capabilities. That said, it's important to note that current LLMs struggle to perform reasoning tasks, so this requires even more advanced prompt engineering techniques. We will cover these advanced techniques in the next guide. For now, we will cover a few basic examples to show arithmetic capabilities.
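Because arithmetic is exactly where models tend to slip, it helps to verify their claims outside the model; a minimal sketch in plain Python (the numbers are illustrative):

```python
# Verify a claim such as "the odd numbers in this group add up to an even number."
nums = [15, 32, 5, 13, 82, 7, 1]  # illustrative group of numbers
odd_sum = sum(n for n in nums if n % 2 == 1)
print(odd_sum, "even" if odd_sum % 2 == 0 else "odd")  # prints: 41 odd
```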
*Prompt:*
```
@@ -155,7 +155,7 @@ Not all the components are required for a prompt and the format depends on the t
---
## General Tips for Designing Prompts
- Here are few some tips to keep in mind while you are designing your prompts:
+ Here are some tips to keep in mind while you are designing your prompts:
### The Instruction
You can design effective prompts for various simple tasks by using commands to instruct the model on what you want to achieve, such as "Write", "Classify", "Summarize", "Translate", "Order", etc.
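A common layout is to place the instruction at the start of the prompt and separate it from the input with a clear marker; a minimal sketch (the `###` separator and the sample task are illustrative conventions):

```python
# Instruction-first prompt template with a simple separator.
def make_prompt(instruction: str, text: str) -> str:
    return f"### Instruction ###\n{instruction}\n\nText: {text}"

print(make_prompt("Translate the text below to Spanish:", '"hello!"'))
```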
@@ -205,7 +205,7 @@ Input text is obtained from [this Nature article](https://www.nature.com/article
Given the tips above about being detailed and improving format, it's easy to fall into the trap of wanting to be too clever about prompts and potentially creating imprecise descriptions. It's often better to be specific and direct. The analogy here is very similar to effective communication -- the more direct, the more effective the message gets across.
- For example, you might be interested in generating a list of products to buy to prepare a BBQ. You might try something like:
+ For example, you might be interested in learning the concept of prompt engineering. You might try something like:
```
Explain the concept prompt engineering. Keep the explanation short, only a few sentences, and don't be too descriptive.