forked from Archives/langchain
parent
243211a5ae
commit
e49fc51492
@ -94,7 +94,7 @@
"]\n",
"example_prompt = PromptTemplate(input_variables=[\"question\", \"answer\"], template=\"Question: {question}\\n{answer}\")\n",
"\n",
"prompt = FewShotPrompt(\n",
"prompt = FewShotPromptTemplate(\n",
" examples=examples, \n",
" example_prompt=example_prompt, \n",
" suffix=\"Question: {input}\", \n",
@ -104,7 +104,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 4,
"id": "897d4e08",
"metadata": {},
"outputs": [],
@ -128,7 +128,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 5,
"id": "7ab7379f",
"metadata": {},
"outputs": [],
@ -298,7 +298,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.7"
"version": "3.7.6"
}
},
"nbformat": 4,
@ -73,7 +73,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 3,
"id": "a7bd36bc",
"metadata": {
"pycharm": {
@ -87,7 +87,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 4,
"id": "e1efb008",
"metadata": {
"pycharm": {
@ -100,15 +100,26 @@
"text/plain": [
"['',\n",
" '',\n",
" 'Question: What is the highest mountain peak in North America?',\n",
" '',\n",
" 'Question: Is the film \"The Omen\" based on a book?',\n",
" 'Thought 1: I need to search \"The Omen\" and find if it is based on a book.',\n",
" 'Action 1: Search[\"The Omen\"]',\n",
" 'Observation 1: The Omen is a 1976 American supernatural horror film directed by Richard Donner and written by David Seltzer.',\n",
" 'Thought 2: The Omen is not based on a book.']"
" 'Thought 1: I need to search North America and find the highest mountain peak.',\n",
" '',\n",
" 'Action 1: Search[North America]',\n",
" '',\n",
" 'Observation 1: North America is a continent entirely within the Northern Hemisphere and almost all within the Western Hemisphere.',\n",
" '',\n",
" 'Thought 2: I need to look up \"highest mountain peak\".',\n",
" '',\n",
" 'Action 2: Lookup[highest mountain peak]',\n",
" '',\n",
" 'Observation 2: (Result 1 / 1) Denali, formerly Mount McKinley, is the highest mountain peak in North America, with a summit elevation of 20,310 feet (6,190 m) above sea level.',\n",
" '',\n",
" 'Thought 3: Denali is the highest mountain peak in North America, with a summit elevation of 20,310 feet.',\n",
" '',\n",
" 'Action 3: Finish[20,310 feet]']"
]
},
"execution_count": 6,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@ -142,7 +153,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.7"
"version": "3.7.6"
}
},
"nbformat": 4,
@ -191,7 +191,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts.few_shot import FewShotPromptTemplate"
"from langchain.prompts import FewShotPromptTemplate"
]
},
{
@ -276,7 +276,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts.example_selector.length_based import LengthBasedExampleSelector"
"from langchain.prompts.example_selector import LengthBasedExampleSelector"
]
},
{
@ -298,7 +298,7 @@
},
{
"cell_type": "code",
"execution_count": 25,
"execution_count": 10,
"id": "207e55f7",
"metadata": {},
"outputs": [],
@ -328,7 +328,7 @@
},
{
"cell_type": "code",
"execution_count": 26,
"execution_count": 11,
"id": "d00b4385",
"metadata": {},
"outputs": [
@ -365,7 +365,7 @@
},
{
"cell_type": "code",
"execution_count": 30,
"execution_count": 12,
"id": "878bcde9",
"metadata": {},
"outputs": [
@ -406,7 +406,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts.example_selector.semantic_similarity import SemanticSimilarityExampleSelector\n",
"from langchain.prompts.example_selector import SemanticSimilarityExampleSelector\n",
"from langchain.vectorstores import FAISS\n",
"from langchain.embeddings import OpenAIEmbeddings"
]
@ -8,7 +8,7 @@ PromptTemplates generically have a `format` method that takes in variables and r
The simplest implementation of this is to have a template string with some variables in it, and then format it with the incoming variables.
More complex iterations dynamically construct the template string from few shot examples, etc.

For a more detailed explanation of how LangChain approaches prompts and prompt templates, see [here](prompts.md).
For a more detailed explanation of how LangChain approaches prompts and prompt templates, see [here](/examples/prompts/prompt_management).

## LLMs
Wrappers around Large Language Models (in particular, the `generate` ability of large language models) are some of the core functionality of LangChain.
@ -1,138 +0,0 @@
# Prompts

Prompts and all the tooling around them are integral to working with language models, and therefore
really important to get right, from both an interface and a naming perspective. This is a "design doc"
of sorts explaining how we think about prompts and the related concepts, and why the interfaces
for working with them are the way they are in LangChain.

For a more code-based walkthrough of all these concepts, check out our example [here](/examples/prompts/prompt_management)

## Prompt

### Concept
A prompt is the final string that gets fed into the language model.

### LangChain Implementation
In LangChain a prompt is represented as just a string.

## Input Variables

### Concept
Input variables are parts of a prompt that are not known until runtime, e.g. they could be user provided.

### LangChain Implementation
In LangChain input variables are represented as a dictionary of key-value pairs, with the key
being the variable name and the value being the variable value.

## Examples

### Concept
Examples are essentially datapoints that can be used to teach the model what to do. These can be included
in prompts to better instruct the model on what to do.

### LangChain Implementation
In LangChain examples are represented as a dictionary of key-value pairs, with the key being the feature
(or label) name, and the value being the feature (or label) value.

## Example Selector

### Concept
If you have a large number of examples, you may need to select which ones to include in the prompt. The
Example Selector is the class responsible for doing so.

### LangChain Implementation

#### BaseExampleSelector
In LangChain there is a BaseExampleSelector that exposes the following interface

```python
class BaseExampleSelector:

    def select_examples(self, input_variables: dict):
        ...
```

Notice that it does not take in examples at runtime when it's selecting them - those are assumed to have been provided ahead of time.

#### LengthExampleSelector
The LengthExampleSelector selects examples based on the length of the input variables.
This is useful when you are worried about constructing a prompt that will go over the length
of the context window. For longer inputs, it will select fewer examples to include, while for
shorter inputs it will select more.
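To make that behavior concrete, here is a minimal sketch of a length-based selector. It is an illustration of the idea only, not LangChain's actual implementation; the word-count length function and the `max_length` budget are assumptions for the example:

```python
# Illustrative sketch of a length-based example selector -- not LangChain's
# actual implementation. Length is measured in words (an assumption).

class LengthExampleSelector:
    """Select as many examples as fit under a maximum total length."""

    def __init__(self, examples: list, max_length: int):
        self.examples = examples      # provided ahead of time, per BaseExampleSelector
        self.max_length = max_length  # total word budget for input + examples

    @staticmethod
    def _length(text: str) -> int:
        return len(text.split())

    def select_examples(self, input_variables: dict) -> list:
        # Longer inputs leave less budget, so fewer examples get selected.
        remaining = self.max_length - sum(
            self._length(str(v)) for v in input_variables.values()
        )
        selected = []
        for example in self.examples:
            cost = sum(self._length(str(v)) for v in example.values())
            if cost > remaining:
                break
            selected.append(example)
            remaining -= cost
        return selected
```

With a short input the whole example list fits in the budget; with a long input the selector returns fewer (possibly zero) examples.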

#### SemanticSimilarityExampleSelector
The SemanticSimilarityExampleSelector selects examples based on which examples are most similar
to the inputs. It does this by finding the examples whose embeddings have the greatest
cosine similarity with the inputs.
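The core of that idea can be sketched in a few lines. This is illustrative only: the bag-of-words "embedding" below is a stand-in (the real selector uses model embeddings stored in a vector store such as FAISS), and the function names are made up for the example:

```python
# Illustrative sketch of similarity-based selection -- the toy bag-of-words
# "embedding" stands in for real model embeddings.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: word counts. An assumption for illustration only.
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def select_most_similar(examples: list, input_text: str, k: int = 1) -> list:
    # Rank examples by cosine similarity of their embedding to the input's.
    query = embed(input_text)
    ranked = sorted(
        examples,
        key=lambda ex: cosine_similarity(embed(" ".join(ex.values())), query),
        reverse=True,
    )
    return ranked[:k]
```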

## Prompt Template

### Concept
The prompts that get fed into the language model are nearly always not hardcoded, but rather a combination
of parts, including Examples and Input Variables. A prompt template is responsible
for taking those parts and constructing a prompt.

### LangChain Implementation

#### BasePromptTemplate
In LangChain there is a BasePromptTemplate that exposes the following interface

```python
from typing import List

class BasePromptTemplate:

    @property
    def input_variables(self) -> List[str]:
        ...

    def format(self, **kwargs) -> str:
        ...
```

The input variables property is used to provide introspection of the PromptTemplate and know
what inputs it expects. The format method takes in input variables and returns the prompt.

#### PromptTemplate
The PromptTemplate implementation is the simplest form of a prompt template. It consists of three parts:
- input variables: which variables this prompt template expects
- template: the template into which these variables will be formatted
- template format: the format of the template (e.g. mustache, python f-strings, etc)

For example, if I was making an application that took a user-inputted concept and asked a language model
to make a joke about that concept, I might use this specification for the PromptTemplate:
- input variables = `["thing"]`
- template = `"Tell me a joke about {thing}"`
- template format = `"f-string"`
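With the f-string template format, `format` reduces to Python string formatting. A minimal sketch of the idea (not the actual LangChain class):

```python
# Minimal sketch of the PromptTemplate idea -- not the actual LangChain class.

class PromptTemplate:
    def __init__(self, input_variables: list, template: str):
        self.input_variables = input_variables  # e.g. ["thing"]
        self.template = template                # e.g. "Tell me a joke about {thing}"

    def format(self, **kwargs) -> str:
        missing = set(self.input_variables) - set(kwargs)
        if missing:
            raise KeyError(f"Missing input variables: {missing}")
        return self.template.format(**kwargs)

prompt = PromptTemplate(
    input_variables=["thing"],
    template="Tell me a joke about {thing}",
)
print(prompt.format(thing="chickens"))  # Tell me a joke about chickens
```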

#### FewShotPromptTemplate
A FewShotPromptTemplate is a Prompt Template that includes some examples. It consists of:
- examples OR example selector: a list of examples to use, or an Example Selector to select which examples to use
- example prompt template: a Prompt Template responsible for taking an individual example (a dictionary) and turning it into a string to be used in the prompt
- prefix: the template put in the prompt before listing any examples
- suffix: the template put in the prompt after listing any examples
- example separator: a string separator used to join the prefix, the examples, and the suffix together

For example, if I wanted to turn the above example into a few-shot prompt, this is what it would
look like:

First I would collect some examples, like
```python
examples = [
    {"concept": "chicken", "joke": "Why did the chicken cross the road?"},
    ...
]
```

I would then make sure to define a prompt template for how each example should be formatted
when inserted into the prompt:
```python
prompt_template = PromptTemplate(
    input_variables=["concept", "joke"],
    template="Tell me a joke about {concept}\n{joke}"
)
```

Then, I would define the components as:
- examples: the above examples
- example_prompt: the above example prompt
- prefix = `"You are a comedian telling jokes on demand."`
- suffix = `"Tell me a joke about {concept}"`
- input variables = `["concept"]`
- template format = `"f-string"`
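Joining those components with the example separator (a blank line is assumed here) yields the final prompt. A sketch of the assembly step, again illustrative rather than LangChain's actual implementation:

```python
# Sketch of how a few-shot prompt is assembled from its parts -- illustrative only.

def format_few_shot_prompt(examples, example_template, prefix, suffix,
                           example_separator="\n\n", **input_variables) -> str:
    # Format each example dict with the example prompt template.
    formatted_examples = [example_template.format(**ex) for ex in examples]
    # Join prefix, examples, and formatted suffix with the separator.
    pieces = [prefix] + formatted_examples + [suffix.format(**input_variables)]
    return example_separator.join(pieces)

prompt = format_few_shot_prompt(
    examples=[{"concept": "chicken", "joke": "Why did the chicken cross the road?"}],
    example_template="Tell me a joke about {concept}\n{joke}",
    prefix="You are a comedian telling jokes on demand.",
    suffix="Tell me a joke about {concept}",
    concept="cars",
)
```

Here `prompt` ends up as the prefix, then the formatted chicken example, then `"Tell me a joke about cars"`, separated by blank lines.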
@ -74,7 +74,6 @@ see detailed information about the various classes, methods, and APIs.
:name: resources

explanation/core_concepts.md
explanation/prompts.md
explanation/glossary.md
Discord <https://discord.gg/6adMQxSpJS>