Update documentation for prompts (#8381)

* Documentation to favor creation without declaring input_variables
* Cut out obvious examples, but add more description in a few places

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>

@@ -3,10 +3,12 @@ sidebar_position: 0
---
# Prompts

A prompt for a language model is a set of instructions or input provided by a user to
guide the model's response, helping it understand the context and generate relevant
and coherent language-based output, such as answering questions, completing sentences,
or engaging in a conversation.

LangChain provides several classes and functions to help construct and work with prompts.

- [Prompt templates](/docs/modules/model_io/prompts/prompt_templates/): Parametrized model inputs
- [Example selectors](/docs/modules/model_io/prompts/example_selectors/): Dynamically select examples to include in prompts

@@ -4,18 +4,15 @@ sidebar_position: 0
# Prompt templates

Prompt templates are pre-defined recipes for generating prompts for language models.

A template may include instructions, few-shot examples, and specific context and
questions appropriate for a given task.

LangChain provides tooling to create and work with prompt templates.

LangChain strives to create model-agnostic templates to make it easy to reuse
existing templates across different language models.

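As a quick illustration of that goal, here is a minimal sketch of one template being reused across a plain LLM and a chat model. It assumes the `openai` package is installed and an OpenAI API key is configured; the model classes and wording are just examples:

```python
from langchain import PromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI

prompt = PromptTemplate.from_template("Say {word} in French.")

# The same formatted string works as input to a text-completion LLM...
OpenAI()(prompt.format(word="hello"))
# ...and as input to a chat model.
ChatOpenAI().predict(prompt.format(word="hello"))
```
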
import GetStarted from "@snippets/modules/model_io/prompts/prompt_templates/get_started.mdx"

@@ -1,140 +1,115 @@
Typically, language models expect the prompt to either be a string or else a list of chat messages.

## Prompt template

Use `PromptTemplate` to create a template for a string prompt.

By default, `PromptTemplate` uses [Python's str.format](https://docs.python.org/3/library/stdtypes.html#str.format)
syntax for templating; however, other templating syntax is available (e.g., `jinja2`).

```python
from langchain import PromptTemplate

prompt_template = PromptTemplate.from_template(
    "Tell me a {adjective} joke about {content}."
)
prompt_template.format(adjective="funny", content="chickens")
```

<CodeOutputBlock lang="python">

```
"Tell me a funny joke about chickens."
```
</CodeOutputBlock>
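
For reference, here is a sketch of the same template written with `jinja2` syntax instead. This assumes the `jinja2` package is installed and that your version of LangChain supports the `template_format` argument to `from_template`:

```python
from langchain import PromptTemplate

# Jinja2 templates use {{ ... }} placeholders instead of str.format's { ... }.
jinja2_template = "Tell me a {{ adjective }} joke about {{ content }}."
prompt_template = PromptTemplate.from_template(jinja2_template, template_format="jinja2")

prompt_template.format(adjective="funny", content="chickens")
# -> "Tell me a funny joke about chickens."
```
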
The template supports any number of variables, including no variables:

```python
from langchain import PromptTemplate

prompt_template = PromptTemplate.from_template(
    "Tell me a joke"
)
prompt_template.format()
```

For additional validation, specify `input_variables` explicitly. These variables
will be compared against the variables present in the template string during instantiation,
raising an exception if there is a mismatch; for example:

```python
from langchain import PromptTemplate

invalid_prompt = PromptTemplate(
    input_variables=["adjective"],
    template="Tell me a {adjective} joke about {content}."
)
```
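
Instantiating `invalid_prompt` raises an exception here, since the template references `{content}` but `input_variables` lists only `"adjective"`.
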
You can create custom prompt templates that format the prompt in any way you want.
For more information, see [Custom Prompt Templates](./custom_prompt_template.html).
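
As a flavor of what that page covers, here is a minimal sketch that subclasses `StringPromptTemplate`; the class name and formatting logic are invented for illustration:

```python
from langchain.prompts import StringPromptTemplate


class QuestionPromptTemplate(StringPromptTemplate):
    """Hypothetical template that wraps a question in fixed instructions."""

    def format(self, **kwargs) -> str:
        return f"Answer concisely.\nQuestion: {kwargs['question']}"


prompt = QuestionPromptTemplate(input_variables=["question"])
prompt.format(question="What is a prompt template?")
# -> "Answer concisely.\nQuestion: What is a prompt template?"
```
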
<!-- TODO(shreya): Add link to Jinja -->

## Chat prompt template

The prompt to [Chat Models](../models/chat) is a list of chat messages.

Each chat message is associated with content and an additional parameter called `role`.
For example, in the OpenAI [Chat Completions API](https://platform.openai.com/docs/guides/chat/introduction), a chat message can be associated with an AI assistant, a human, or a system role.

Create a chat prompt template like this:
```python
from langchain.prompts import ChatPromptTemplate

template = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful AI bot. Your name is {name}."),
    ("human", "Hello, how are you doing?"),
    ("ai", "I'm doing well, thanks!"),
    ("human", "{user_input}"),
])

messages = template.format_messages(
    name="Bob",
    user_input="What is your name?"
)
```
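
The formatted `messages` can be passed directly to a chat model. A minimal sketch, assuming the `openai` package is installed and an OpenAI API key is set in the environment:

```python
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI()
# Calling the chat model with the formatted messages returns an AIMessage.
llm(messages)
```
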
`ChatPromptTemplate.from_messages` accepts a variety of message representations.

For example, in addition to using the 2-tuple representation of (type, content) used
above, you could pass in an instance of `MessagePromptTemplate` or `BaseMessage`.

```python
from langchain.prompts import ChatPromptTemplate
from langchain.prompts.chat import SystemMessage, HumanMessagePromptTemplate

template = ChatPromptTemplate.from_messages(
    [
        SystemMessage(
            content=(
                "You are a helpful assistant that re-writes the user's text to "
                "sound more upbeat."
            )
        ),
        HumanMessagePromptTemplate.from_template("{text}"),
    ]
)
```

You can then format the template and pass the resulting messages straight to a chat model:

```python
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI()
llm(template.format_messages(text='i dont like eating tasty things.'))
```
<CodeOutputBlock lang="python">
```
AIMessage(content='I absolutely adore indulging in delicious treats!', additional_kwargs={}, example=False)
```
</CodeOutputBlock>
This provides you with a lot of flexibility in how you construct your chat prompts.
