prompt chaining

pull/350/head
Elvis Saravia 5 months ago
parent 0a033ee0b4
commit e26af18e85

Binary file not shown.

After

Width:  |  Height:  |  Size: 133 KiB

@ -33,6 +33,11 @@ While the previous basic examples were fun, in this section we cover more advanc
title="General Knowledge Prompting"
href="/techniques/knowledge"
/>
<Card
icon={<FilesIcon />}
title="Prompt Chaining"
href="/techniques/prompt_chaining"
/>
<Card
icon={<FilesIcon />}
title="Tree of Thoughts Prompting"

@ -4,6 +4,7 @@
"cot": "Prompt cadena de pensament (CoT)",
"consistency": "Autoconsistència",
"knowledge": "Prompt de coneixement generat",
"prompt_chaining": "Prompt Chaining",
"tot": "Tree of Thoughts",
"rag": "Retrieval Augmented Generation",
"art": "Automatic Reasoning and Tool-use",

@ -4,6 +4,7 @@
"cot": "Chain-of-Thought Prompting",
"consistency": "Selbstkonsistenz",
"knowledge": "Generiertes Wissens-Prompting",
"prompt_chaining": "Prompt Chaining",
"tot": "Tree of Thoughts",
"rag": "Retrieval Augmented Generation",
"art": "Automatic Reasoning and Tool-use",

@ -4,6 +4,7 @@
"cot": "Chain-of-Thought Prompting",
"consistency": "Self-Consistency",
"knowledge": "Generate Knowledge Prompting",
"prompt_chaining": "Prompt Chaining",
"tot": "Tree of Thoughts",
"rag": "Retrieval Augmented Generation",
"art": "Automatic Reasoning and Tool-use",

@ -4,6 +4,7 @@
"cot": "Prompt cadena de pensamiento (CoT)",
"consistency": "Auto-consistencia",
"knowledge": "Prompt de conocimiento generado",
"prompt_chaining": "Prompt Chaining",
"tot": "Tree of Thoughts",
"rag": "Retrieval Augmented Generation",
"art": "Automatic Reasoning and Tool-use",

@ -4,6 +4,7 @@
"cot": "Chain-of-Thought Prompting",
"consistency": "Self-Consistency",
"knowledge": "Generate Knowledge Prompting",
"prompt_chaining": "Prompt Chaining",
"tot": "Tree of Thoughts",
"rag": "Retrieval Augmented Generation",
"art": "Automatic Reasoning and Tool-use",

@ -4,6 +4,7 @@
"cot": "Chain-of-Thought Prompting",
"consistency": "Self-Consistency",
"knowledge": "Generate Knowledge Prompting",
"prompt_chaining": "Prompt Chaining",
"tot": "Tree of Thoughts",
"rag": "Retrieval Augmented Generation",
"art": "Automatic Reasoning and Tool-use",

@ -4,6 +4,7 @@
"cot": "Prompt Chain-of-Thought",
"consistency": "Self-Consistency",
"knowledge": "Prompt Generate Knowledge",
"prompt_chaining": "Prompt Chaining",
"tot": "Tree of Thoughts",
"rag": "Retrieval Augmented Generation",
"art": "Automatic Reasoning and Tool-use",

@ -4,6 +4,7 @@
"cot": "Chain-of-Thoughtプロンプティング",
"consistency": "自己整合性Self-Consistency",
"knowledge": "知識生成プロンプティング",
"prompt_chaining": "Prompt Chaining",
"tot": "Tree of Thoughts",
"rag": "Retrieval Augmented Generation",
"art": "Automatic Reasoning and Tool-use",

@ -4,6 +4,7 @@
"cot": "Chain-of-Thought Prompting",
"consistency": "Self-Consistency",
"knowledge": "Generate Knowledge Prompting",
"prompt_chaining": "Prompt Chaining",
"tot": "Tree of Thoughts",
"rag": "Retrieval Augmented Generation",
"art": "Automatic Reasoning and Tool-use",

@ -4,6 +4,7 @@
"cot": "Chain-of-Thought Prompting",
"consistency": "Self-Consistency",
"knowledge": "Generate Knowledge Prompting",
"prompt_chaining": "Prompt Chaining",
"tot": "Tree of Thoughts",
"rag": "Retrieval Augmented Generation",
"art": "Automatic Reasoning and Tool-use",

@ -4,6 +4,7 @@
"cot": "Chain-of-Thought Prompting",
"consistency": "Self-Consistency",
"knowledge": "Generate Knowledge Prompting",
"prompt_chaining": "Prompt Chaining",
"tot": "Tree of Thoughts",
"rag": "Retrieval Augmented Generation",
"art": "Automatic Reasoning and Tool-use",

@ -4,6 +4,7 @@
"cot": "Düşünce Zinciri İstemleri",
"consistency": "Öz-Tutarlılık",
"knowledge": "Üretilmiş Bilgi İstemleri",
"prompt_chaining": "Prompt Chaining",
"tot": "Düşünce Ağacı",
"rag": "Veri Alımı Artırılmış Üretim",
"art": "Otomatik Akıl Yürütme ve Araç Kullanımı",

@ -4,6 +4,7 @@
"cot": "链式思考CoT提示",
"consistency": "自我一致性",
"knowledge": "生成知识提示",
"prompt_chaining": "Prompt Chaining",
"tot": "思维树ToT",
"rag": "检索增强生成 (RAG)",
"art": "自动推理并使用工具ART",

@ -0,0 +1,3 @@
# Prompt Chaining
This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@ -0,0 +1,3 @@
# Prompt Chaining
This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@ -0,0 +1,105 @@
# Prompt Chaining
import {Screenshot} from 'components/screenshot'
import PC1 from '../../img/prompt_chaining/prompt-chaining-1.png'
## Introduction to Prompt Chaining
To improve the reliability and performance of LLMs, one of the important prompt engineering techniques is to break tasks into subtasks. Once those subtasks have been identified, the LLM is prompted with a subtask and then its response is used as input to another prompt. This is what's referred to as prompt chaining, where a task is split into subtasks with the idea of creating a chain of prompt operations.
Prompt chaining is useful to accomplish complex tasks which an LLM might struggle to address if prompted with a single, very detailed prompt. In prompt chaining, chained prompts perform transformations or additional processes on the generated responses before reaching a final desired state.
Besides achieving better performance, prompt chaining helps boost the transparency of your LLM application and increases controllability and reliability. This means that you can debug problems with model responses much more easily, and analyze and improve performance in the specific stages that need improvement.
Prompt chaining is particularly useful when building LLM-powered conversational assistants and improving the personalization and user experience of your applications.
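To make the idea concrete, below is a minimal sketch of a two-step prompt chain using the OpenAI Python client (`openai>=1.0`), assuming an `OPENAI_API_KEY` is set in the environment. The `run_prompt` helper, the model name, and the example prompts are illustrative assumptions, not part of the guide; any chat-completion API or other LLM provider can be chained the same way.
```python
# Minimal prompt-chaining sketch (helper name, model, and prompts are illustrative assumptions).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def run_prompt(system_prompt: str, user_input: str) -> str:
    """Run one link of the chain and return the model's text response."""
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",  # any capable chat model works here
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_input},
        ],
    )
    return response.choices[0].message.content

# Step 1: perform the first subtask (e.g., summarize a piece of text).
step1_output = run_prompt(
    "Summarize the user's text in three bullet points.",
    "Prompt chaining splits a task into subtasks so that the output of one prompt becomes the input of the next.",
)

# Step 2: feed the first response into the next prompt in the chain.
step2_output = run_prompt(
    "Rewrite the bullet points below as a single friendly paragraph.",
    step1_output,
)

print(step2_output)
```
Each link in the chain can also be logged and inspected on its own, which is what makes debugging and improving individual stages easier.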
## Use Cases for Prompt Chaining
### Prompt Chaining for Document QA
Prompt chaining can be used in different scenarios that involve several operations or transformations. For instance, one common use case of LLMs involves answering questions about a large text document. It helps if you design two different prompts, where the first prompt is responsible for extracting quotes relevant to the question and the second prompt takes those quotes and the original document as input to answer the given question. In other words, you will be creating two different prompts to perform the task of answering a question given a document.
The first prompt below extracts the relevant quotes from the document given the question. Note that for simplicity, we have added a placeholder for the document, `{{document}}`. To test the prompt you can copy and paste an article from Wikipedia, such as this page on [prompt engineering](https://en.wikipedia.org/wiki/Prompt_engineering). Due to the larger context used for this task, we are using the `gpt-4-1106-preview` model from OpenAI. You can use the prompt with other long-context LLMs such as Claude.
Prompt 1:
```
You are a helpful assistant. Your task is to help answer a question given a document. The first step is to extract quotes relevant to the question from the document, delimited by ####. Please output the list of quotes using <quotes></quotes>. Respond with "No relevant quotes found!" if no relevant quotes were found.
####
{{document}}
####
```
Here is a screenshot of the entire prompt, including the question, which is passed using the `user` role.
<Screenshot src={PC1} alt="Prompt Chaining Part 1" />
Output of Prompt 1:
```
<quotes>
- Chain-of-thought (CoT) prompting[27]
- Generated knowledge prompting[37]
- Least-to-most prompting[38]
- Self-consistency decoding[39]
- Complexity-based prompting[41]
- Self-refine[42]
- Tree-of-thought prompting[43]
- Maieutic prompting[45]
- Directional-stimulus prompting[46]
- Textual inversion and embeddings[59]
- Using gradient descent to search for prompts[61][62][63][64]
- Prompt injection[65][66][67]
</quotes>
```
The quotes that were returned by the first prompt can now be used as input to the second prompt below. Note that you could clean up the quotes a bit more, e.g., remove the citations; those citations could be stripped or utilized as part of another prompt in the chain, but we ignore this for now. The second prompt then takes the relevant quotes extracted by Prompt 1 and prepares a helpful response to the question given the document and those extracted quotes. The second prompt can be the following:
Prompt 2:
```
Given a set of relevant quotes (delimited by <quotes></quotes>) extracted from a document and the original document (delimited by ####), please compose an answer to the question. Ensure that the answer is accurate, has a friendly tone, and sounds helpful.
####
{{document}}
####
<quotes>
- Chain-of-thought (CoT) prompting[27]
- Generated knowledge prompting[37]
- Least-to-most prompting[38]
- Self-consistency decoding[39]
- Complexity-based prompting[41]
- Self-refine[42]
- Tree-of-thought prompting[43]
- Maieutic prompting[45]
- Directional-stimulus prompting[46]
- Textual inversion and embeddings[59]
- Using gradient descent to search for prompts[61][62][63][64]
- Prompt injection[65][66][67]
</quotes>
```
Output of Prompt 2:
```
The prompting techniques mentioned in the document include:
1. Chain-of-thought (CoT) prompting[27]
2. Generated knowledge prompting[37]
3. Least-to-most prompting[38]
4. Self-consistency decoding[39]
5. Complexity-based prompting[41]
6. Self-refine[42]
7. Tree-of-thought prompting[43]
8. Maieutic prompting[45]
9. Directional-stimulus prompting[46]
10. Textual inversion and embeddings[59]
11. Using gradient descent to search for prompts[61][62][63][64]
12. Prompt injection[65][66][67]
Each of these techniques employs unique strategies to enhance or specify the interactions with large language models to produce the desired outcomes.
```
As you can see, simplifying tasks and creating prompt chains is a useful prompting approach where the responses need to undergo several operations or transformations. As an exercise, feel free to design a prompt that removes the citations (e.g., [27]) from the response before sending it as the final response to the user of your application.
You can also find more examples of prompt chaining in this [documentation](https://docs.anthropic.com/claude/docs/prompt-chaining), which leverages the Claude LLM. Our example is inspired by and adapted from their examples.
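For reference, here is a rough sketch of how the two prompts above could be wired together programmatically with the OpenAI Python client. The `ask` helper, the variable names, and the regex used for the citation-removal exercise are assumptions for illustration; the prompt text is taken from Prompt 1 and Prompt 2 above.
```python
# Sketch of the document QA chain above (helper name and citation cleanup are assumptions).
import re
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(system_prompt: str, user_question: str) -> str:
    """Run one link of the chain and return the model's text response."""
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_question},
        ],
    )
    return response.choices[0].message.content

document = "..."  # paste the Wikipedia article (or any long document) here
question = "What are the prompting techniques mentioned in the document?"

# Prompt 1: extract quotes relevant to the question.
prompt_1 = f"""You are a helpful assistant. Your task is to help answer a question given a document.
The first step is to extract quotes relevant to the question from the document, delimited by ####.
Please output the list of quotes using <quotes></quotes>. Respond with "No relevant quotes found!"
if no relevant quotes were found.
####
{document}
####"""
quotes = ask(prompt_1, question)

# Prompt 2: answer the question using the document plus the extracted quotes.
prompt_2 = f"""Given a set of relevant quotes (delimited by <quotes></quotes>) extracted from a document
and the original document (delimited by ####), please compose an answer to the question.
Ensure that the answer is accurate, has a friendly tone, and sounds helpful.
####
{document}
####
{quotes}"""
answer = ask(prompt_2, question)

# Exercise idea: strip citation markers such as [27] before returning the final answer.
final_answer = re.sub(r"\[\d+\]", "", answer)
print(final_answer)
```
In a real application you would also want to handle the "No relevant quotes found!" case from Prompt 1 before invoking the second link of the chain.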

@ -0,0 +1,3 @@
# Prompt Chaining
This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@ -0,0 +1,3 @@
# Prompt Chaining
This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@ -0,0 +1,3 @@
# Prompt Chaining
This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@ -0,0 +1,3 @@
# Prompt Chaining
This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@ -0,0 +1,3 @@
# Prompt Chaining
This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@ -0,0 +1,3 @@
# Prompt Chaining
This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@ -0,0 +1,3 @@
# Prompt Chaining
This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@ -0,0 +1,3 @@
# Prompt Chaining
This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@ -0,0 +1,3 @@
# Prompt Chaining
This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.

@ -0,0 +1,3 @@
# Prompt Chaining
This page needs a translation! Feel free to contribute a translation by clicking the `Edit this page` button on the right side.