add usage readmes

harrison/add-usage-readmes
Harrison Chase 1 year ago
parent c9f16f6d84
commit 0d3069589b

@@ -0,0 +1,35 @@
<!-- Add a template for READMEs that capture the utility of prompts -->
# Description of {{prompt}}
{{High level text description of the prompt, including use cases.}}
## Compatible Chains
Below is a list of chains we expect this prompt to be compatible with.
1. {{Chain Name}}: {{Path to chain in module}}
2. ...
## Inputs
This is a description of the inputs that the prompt expects.
1. {{input_var}}: {{Description}}
2. ...
## Usage
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.chains import APIChain
llm = ...
api_docs = ...
prompt = load_from_hub('api/api_response/<file-name>')
chain = APIChain.from_llm_and_api_docs(llm, api_docs, api_response_prompt=prompt)
```

@@ -0,0 +1,35 @@
<!-- Add a template for READMEs that capture the utility of prompts -->
# Description of {{prompt}}
{{High level text description of the prompt, including use cases.}}
## Compatible Chains
Below is a list of chains we expect this prompt to be compatible with.
1. {{Chain Name}}: {{Path to chain in module}}
2. ...
## Inputs
This is a description of the inputs that the prompt expects.
1. {{input_var}}: {{Description}}
2. ...
## Usage
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.chains import APIChain
llm = ...
api_docs = ...
prompt = load_from_hub('api/api_url/<file-name>')
chain = APIChain.from_llm_and_api_docs(llm, api_docs, api_url_prompt=prompt)
```

@@ -0,0 +1,34 @@
<!-- Add a template for READMEs that capture the utility of prompts -->
# Description of {{prompt}}
{{High level text description of the prompt, including use cases.}}
## Compatible Chains
Below is a list of chains we expect this prompt to be compatible with.
1. {{Chain Name}}: {{Path to chain in module}}
2. ...
## Inputs
This is a description of the inputs that the prompt expects.
1. {{input_var}}: {{Description}}
2. ...
## Usage
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.chains import ConversationChain
llm = ...
prompt = load_from_hub('conversation/basic/<file-name>')
chain = ConversationChain(llm=llm, prompt=prompt)
```

@@ -0,0 +1,36 @@
<!-- Add a template for READMEs that capture the utility of prompts -->
# Description of {{prompt}}
{{High level text description of the prompt, including use cases.}}
## Compatible Chains
Below is a list of chains we expect this prompt to be compatible with.
1. {{Chain Name}}: {{Path to chain in module}}
2. ...
## Inputs
This is a description of the inputs that the prompt expects.
1. {{input_var}}: {{Description}}
2. ...
## Usage
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.chains import ConversationChain
from langchain.chains.conversation.memory import ConversationSummaryMemory
llm = ...
prompt = load_from_hub('conversation/summarize/<file-name>')
memory = ConversationSummaryMemory(llm=llm, prompt=prompt)
chain = ConversationChain(llm=llm, memory=memory)
```

@@ -1,27 +1,34 @@
# Hello World
> A simple prompt as an example
<!-- Add a template for READMEs that capture the utility of prompts -->
# Description of {{prompt}}
{{High level text description of the prompt, including use cases.}}
## Compatible Chains
Below is a list of chains we expect this prompt to be compatible with.
1. {{Chain Name}}: {{Path to chain in module}}
2. ...
## Inputs
This is a description of the inputs that the prompt expects.
1. {{input_var}}: {{Description}}
2. ...
## Configuration
- input_variables: []
- There are no inputs
- output_parser: null
- There is no output parsing needed
- template: 'Say hello world.'
- Just a simple hello.
- template_format: f-string
- We use standard f-string formatting here.
## Usage
Ex:
```python3
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.llms import OpenAI
from langchain.prompts.loading import load_prompt
from langchain.chains import LLMChain
llm = OpenAI(temperature=0.9)
# prompt = load_from_hub("hello-world/prompt.yaml")
output = llm(prompt.format())
print(output)
llm = ...
prompt = load_from_hub('hello-world/<file-name>')
chain = LLMChain(llm=llm, prompt=prompt)
```

@@ -1,9 +1,34 @@
A prompt to generate bash commands given natural language objectives.
<!-- Add a template for READMEs that capture the utility of prompts -->
Inputs:
1. question
# Description of {{prompt}}
{{High level text description of the prompt, including use cases.}}
## Compatible Chains
1. `chains/llm_bash/LLMBashChain`
Below is a list of chains we expect this prompt to be compatible with.
1. {{Chain Name}}: {{Path to chain in module}}
2. ...
## Inputs
This is a description of the inputs that the prompt expects.
1. {{input_var}}: {{Description}}
2. ...
## Usage
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.chains import LLMBashChain
llm = ...
prompt = load_from_hub('llm_bash/<file-name>')
chain = LLMBashChain(llm=llm, prompt=prompt)
```

@@ -0,0 +1,34 @@
<!-- Add a template for READMEs that capture the utility of prompts -->
# Description of {{prompt}}
{{High level text description of the prompt, including use cases.}}
## Compatible Chains
Below is a list of chains we expect this prompt to be compatible with.
1. {{Chain Name}}: {{Path to chain in module}}
2. ...
## Inputs
This is a description of the inputs that the prompt expects.
1. {{input_var}}: {{Description}}
2. ...
## Usage
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.chains import LLMMathChain
llm = ...
prompt = load_from_hub('llm_math/<file-name>')
chain = LLMMathChain(llm=llm, prompt=prompt)
```

@@ -1,4 +1,36 @@
Answer a math question in python.
<!-- Add a template for READMEs that capture the utility of prompts -->
# Description of {{prompt}}
{{High level text description of the prompt, including use cases.}}
## Compatible Chains
Below is a list of chains we expect this prompt to be compatible with.
1. {{Chain Name}}: {{Path to chain in module}}
2. ...
## Inputs
This is a description of the inputs that the prompt expects.
1. {{input_var}}: {{Description}}
2. ...
## Usage
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.chains import PALChain
llm = ...
stop = ...
get_answer_expr = ...
prompt = load_from_hub('pal/<file-name>')
chain = PALChain(llm=llm, prompt=prompt, stop=stop, get_answer_expr=get_answer_expr)
```
Inputs:
1. question

@@ -1,3 +1,34 @@
This prompt is a basic implementation of a question answering prompt for a map-reduce chain.
Specifically, it is the "map" prompt that gets applied to all input documents.
It takes in a single variable for the document (`context`) and a variable for the question (`question`).
<!-- Add a template for READMEs that capture the utility of prompts -->
# Description of {{prompt}}
{{High level text description of the prompt, including use cases.}}
## Compatible Chains
Below is a list of chains we expect this prompt to be compatible with.
1. {{Chain Name}}: {{Path to chain in module}}
2. ...
## Inputs
This is a description of the inputs that the prompt expects.
1. {{input_var}}: {{Description}}
2. ...
## Usage
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.chains.question_answering import load_qa_chain
llm = ...
prompt = load_from_hub('qa/map_reduce/question/<file-name>')
chain = load_qa_chain(llm, chain_type="map_reduce", question_prompt=prompt)
```

@@ -1,3 +1,34 @@
This prompt is a basic implementation of a question answering prompt for a map-reduce chain.
Specifically, it is the "reduce" prompt that gets applied to the documents after an initial answer has been generated for each.
It takes in a single variable for the initial responses (`summaries`) and a variable for the question (`question`).
<!-- Add a template for READMEs that capture the utility of prompts -->
# Description of {{prompt}}
{{High level text description of the prompt, including use cases.}}
## Compatible Chains
Below is a list of chains we expect this prompt to be compatible with.
1. {{Chain Name}}: {{Path to chain in module}}
2. ...
## Inputs
This is a description of the inputs that the prompt expects.
1. {{input_var}}: {{Description}}
2. ...
## Usage
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.chains.question_answering import load_qa_chain
llm = ...
prompt = load_from_hub('qa/map_reduce/reduce/<file-name>')
chain = load_qa_chain(llm, chain_type="map_reduce", combine_prompt=prompt)
```

@@ -1,6 +1,34 @@
This prompt refines an existing answer to a question given context, with additional instructions to update the sources used.
<!-- Add a template for READMEs that capture the utility of prompts -->
# Description of {{prompt}}
{{High level text description of the prompt, including use cases.}}
## Compatible Chains
Below is a list of chains we expect this prompt to be compatible with.
1. {{Chain Name}}: {{Path to chain in module}}
2. ...
## Inputs
This is a description of the inputs that the prompt expects.
1. {{input_var}}: {{Description}}
2. ...
## Usage
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.chains.question_answering import load_qa_chain
llm = ...
prompt = load_from_hub('qa/refine/<file-name>')
chain = load_qa_chain(llm, chain_type="refine", refine_prompt=prompt)
```
The inputs are:
1. question
2. existing_answer
3. context_str

@@ -1,6 +1,34 @@
This is a question answering prompt for a "stuff" chain that cites sources.
It takes a variable for all the documents (`context`) and a variable for the question (`question`).
<!-- Add a template for READMEs that capture the utility of prompts -->
# Description of {{prompt}}
{{High level text description of the prompt, including use cases.}}
## Compatible Chains
1. `chains/qa_with_sources`
Below is a list of chains we expect this prompt to be compatible with.
1. {{Chain Name}}: {{Path to chain in module}}
2. ...
## Inputs
This is a description of the inputs that the prompt expects.
1. {{input_var}}: {{Description}}
2. ...
## Usage
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.chains.question_answering import load_qa_chain
llm = ...
prompt = load_from_hub('qa/stuff/<file-name>')
chain = load_qa_chain(llm, chain_type="stuff", prompt=prompt)
```

@@ -1,6 +1,35 @@
This prompt takes a natural language question, converts it into a SQL query, executes the query to get the result, and finally returns the answer to the user.
<!-- Add a template for READMEs that capture the utility of prompts -->
# Description of {{prompt}}
{{High level text description of the prompt, including use cases.}}
## Compatible Chains
1. `chains/sql_database.py`
Below is a list of chains we expect this prompt to be compatible with.
1. {{Chain Name}}: {{Path to chain in module}}
2. ...
## Inputs
This is a description of the inputs that the prompt expects.
1. {{input_var}}: {{Description}}
2. ...
## Usage
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.chains import SQLDatabaseChain
llm = ...
database = ...
prompt = load_from_hub('sql_query/language_to_sql_output/<file-name>')
chain = SQLDatabaseChain(llm=llm, database=database, prompt=prompt)
```

@@ -1,6 +1,35 @@
This prompt takes a question and a list of potential tables, and determines which tables in a database would be most relevant.
<!-- Add a template for READMEs that capture the utility of prompts -->
# Description of {{prompt}}
{{High level text description of the prompt, including use cases.}}
## Compatible Chains
1. `chains/sql_database.py`
Below is a list of chains we expect this prompt to be compatible with.
1. {{Chain Name}}: {{Path to chain in module}}
2. ...
## Inputs
This is a description of the inputs that the prompt expects.
1. {{input_var}}: {{Description}}
2. ...
## Usage
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.chains import SQLDatabaseSequentialChain
llm = ...
database = ...
prompt = load_from_hub('sql_query/relevant_tables/<file-name>')
chain = SQLDatabaseSequentialChain.from_llm(llm, database, decider_prompt=prompt)
```

@@ -1,9 +0,0 @@
# Summarize
This is a type of chain used to distill large amounts of information into a smaller amount.
There are three types of summarize chains:
1. Stuff: This is a simple chain that stuffs all the text from each document into one prompt. This is limited by the token window, so this approach only works for smaller amounts of data.
2. Map Reduce: This maps a summarization prompt onto each data chunk and then combines all the outputs, reducing them with a final summarization prompt.
3. Refine: This iteratively passes in each chunk of data, continuously updating an evolving summary to be more accurate based on each new chunk of data given.
## Usage

@@ -1,2 +0,0 @@
# Map Reduce
This maps a summarization prompt onto each data chunk and then combines all the outputs, reducing them with a final summarization prompt.

@@ -0,0 +1,34 @@
<!-- Add a template for READMEs that capture the utility of prompts -->
# Description of {{prompt}}
{{High level text description of the prompt, including use cases.}}
## Compatible Chains
Below is a list of chains we expect this prompt to be compatible with.
1. {{Chain Name}}: {{Path to chain in module}}
2. ...
## Inputs
This is a description of the inputs that the prompt expects.
1. {{input_var}}: {{Description}}
2. ...
## Usage
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.chains.summarize import load_summarize_chain
llm = ...
prompt = load_from_hub('summarize/map_reduce/map/<file-name>')
chain = load_summarize_chain(llm, chain_type="map_reduce", map_prompt=prompt)
```

@@ -1,2 +1,34 @@
# Refine
This iteratively passes in each chunk of data, continuously updating an evolving summary to be more accurate based on each new chunk of data given.
<!-- Add a template for READMEs that capture the utility of prompts -->
# Description of {{prompt}}
{{High level text description of the prompt, including use cases.}}
## Compatible Chains
Below is a list of chains we expect this prompt to be compatible with.
1. {{Chain Name}}: {{Path to chain in module}}
2. ...
## Inputs
This is a description of the inputs that the prompt expects.
1. {{input_var}}: {{Description}}
2. ...
## Usage
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.chains.summarize import load_summarize_chain
llm = ...
prompt = load_from_hub('summarize/refine/<file-name>')
chain = load_summarize_chain(llm, chain_type="refine", refine_prompt=prompt)
```

@@ -1,13 +0,0 @@
# Prompt config for a chain?
input_variables: ['text']
output_parser: null
# what primitives do we allow?
template: [
map:
input: 'text'
prompt: 'prompt.yaml',
reduce:
prompt: 'promptSummarize.yaml'
]
template_format: f-string

@@ -1,10 +0,0 @@
input_variables: [text]
output_parser: null
template: 'Write a concise summary of the following:
{text}
CONCISE SUMMARY:'
template_format: f-string

@@ -1,2 +1,34 @@
# Stuff
This is a simple chain that stuffs all the text from each document into one prompt. This is limited by the token window, so this approach only works for smaller amounts of data.
<!-- Add a template for READMEs that capture the utility of prompts -->
# Description of {{prompt}}
{{High level text description of the prompt, including use cases.}}
## Compatible Chains
Below is a list of chains we expect this prompt to be compatible with.
1. {{Chain Name}}: {{Path to chain in module}}
2. ...
## Inputs
This is a description of the inputs that the prompt expects.
1. {{input_var}}: {{Description}}
2. ...
## Usage
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.chains.summarize import load_summarize_chain
llm = ...
prompt = load_from_hub('summarize/stuff/<file-name>')
chain = load_summarize_chain(llm, chain_type="stuff", prompt=prompt)
```

@@ -1,19 +1,35 @@
This is a simple prompt that answers a question given some context. If the answer isn't provided in the context, the prompt instructs the LLM to return `I don't know`.
<!-- Add a template for READMEs that capture the utility of prompts -->
# Description of {{prompt}}
{{High level text description of the prompt, including use cases.}}
## Compatible Chains
Below is a list of chains we expect this prompt to be compatible with.
1. {{Chain Name}}: {{Path to chain in module}}
2. ...
## Inputs
This is a description of the inputs that the prompt expects.
1. {{input_var}}: {{Description}}
2. ...
## Usage
```
Below is a code snippet for how to use the prompt.
```python
from langchain.prompts import load_from_hub
from langchain.llms import OpenAI
from langchain.prompts.loading import load_prompt
from langchain.chains import VectorDBQA
llm = OpenAI(temperature=0.9)
prompt = load_from_hub("vector_db_qa/prompt.py")
output = llm(prompt.format(context="...", question="..."))
print(output)
llm = ...
vectorstore = ...
prompt = load_from_hub('vector_db_qa/<file-name>')
chain = VectorDBQA.from_llm(llm, prompt=prompt, vectorstore=vectorstore)
```
## Compatible Chains
1. `chains/vector_db_qa`