Here's the simplest example:

```python
from langchain import PromptTemplate

template = """\
You are a naming consultant for new companies.
What is a good name for a company that makes {product}?
"""

prompt = PromptTemplate.from_template(template)
prompt.format(product="colorful socks")
```

<CodeOutputBlock lang="python">

```
You are a naming consultant for new companies.
What is a good name for a company that makes colorful socks?
```

</CodeOutputBlock>
## Create a prompt template

You can create simple hardcoded prompts using the `PromptTemplate` class. Prompt templates can take any number of input variables, and can be formatted to generate a prompt.
```python
from langchain import PromptTemplate

# An example prompt with no input variables
no_input_prompt = PromptTemplate(input_variables=[], template="Tell me a joke.")
no_input_prompt.format()
# -> "Tell me a joke."

# An example prompt with one input variable
one_input_prompt = PromptTemplate(input_variables=["adjective"], template="Tell me a {adjective} joke.")
one_input_prompt.format(adjective="funny")
# -> "Tell me a funny joke."

# An example prompt with multiple input variables
multiple_input_prompt = PromptTemplate(
    input_variables=["adjective", "content"],
    template="Tell me a {adjective} joke about {content}."
)
multiple_input_prompt.format(adjective="funny", content="chickens")
# -> "Tell me a funny joke about chickens."
```
If you do not wish to specify `input_variables` manually, you can also create a `PromptTemplate` using the `from_template` class method. `langchain` will automatically infer the `input_variables` from the `template` passed in.
```python
template = "Tell me a {adjective} joke about {content}."

prompt_template = PromptTemplate.from_template(template)
prompt_template.input_variables
# -> ['adjective', 'content']
prompt_template.format(adjective="funny", content="chickens")
# -> "Tell me a funny joke about chickens."
```
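The inference itself amounts to reading the `{placeholders}` out of the f-string-style template. A stdlib-only sketch of that idea (this is an illustration, not the LangChain internals):

```python
from string import Formatter

def infer_input_variables(template: str) -> list:
    """Return the named {placeholders} found in `template`, in order."""
    return [field for _, field, _, _ in Formatter().parse(template) if field]

infer_input_variables("Tell me a {adjective} joke about {content}.")
# -> ['adjective', 'content']
```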
You can create custom prompt templates that format the prompt in any way you want. For more information, see [Custom Prompt Templates](./custom_prompt_template.html).

<!-- TODO(shreya): Add link to Jinja -->
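To make the custom-formatting idea concrete, here is a stdlib-only sketch (the class name and behaviour are hypothetical, not the LangChain API): a template object whose `format` method transforms its inputs however it wants before substituting them.

```python
class UppercasePromptTemplate:
    """Hypothetical custom template: upper-cases every input before formatting."""

    def __init__(self, template: str):
        self.template = template

    def format(self, **kwargs) -> str:
        # Custom behaviour lives here; any transformation is possible.
        return self.template.format(**{k: v.upper() for k, v in kwargs.items()})

prompt = UppercasePromptTemplate("Tell me a joke about {content}.")
prompt.format(content="chickens")
# -> "Tell me a joke about CHICKENS."
```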
## Chat prompt template

[Chat Models](../models/chat) take a list of chat messages as input; this list is commonly referred to as a `prompt`. These chat messages differ from a raw string (which you would pass into an [LLM](/docs/modules/model_io/models/llms)) in that every message is associated with a `role`.

For example, in the OpenAI [Chat Completion API](https://platform.openai.com/docs/guides/chat/introduction), a chat message can be associated with the AI, human, or system role. The model is supposed to follow instructions from the system message more closely.
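The role-tagged shape can be sketched with plain dicts (the OpenAI-style format); this is only an illustration of the concept, since LangChain wraps the same idea in message classes rather than dicts:

```python
# Stdlib-only sketch: a chat prompt is a list of role-tagged messages.
def make_chat_prompt(question: str) -> list:
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": question},
    ]

make_chat_prompt("What is a good name for a sock company?")
```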
LangChain provides several prompt templates to make constructing and working with prompts easy. You are encouraged to use these chat-related prompt templates instead of `PromptTemplate` when querying chat models, to fully exploit the potential of the underlying chat model.

```python
from langchain.prompts import (
    ChatPromptTemplate,
    PromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage,
)
```
To create a message template associated with a role, you use a `MessagePromptTemplate`.

For convenience, there is a `from_template` method exposed on these templates. Using it looks like this:
```python
template = "You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
```
If you want to construct the `MessagePromptTemplate` more directly, you can create a `PromptTemplate` outside and then pass it in, e.g.:
```python
prompt = PromptTemplate(
    template="You are a helpful assistant that translates {input_language} to {output_language}.",
    input_variables=["input_language", "output_language"],
)
system_message_prompt_2 = SystemMessagePromptTemplate(prompt=prompt)

assert system_message_prompt == system_message_prompt_2
```
After that, you can build a `ChatPromptTemplate` from one or more `MessagePromptTemplate`s. You can use `ChatPromptTemplate`'s `format_prompt` method; this returns a `PromptValue`, which you can convert to a string or to `Message` objects, depending on whether you want to use the formatted value as input to an LLM or a chat model.
```python
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])

# get a chat completion from the formatted messages
chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages()
```
<CodeOutputBlock lang="python">

```
[SystemMessage(content='You are a helpful assistant that translates English to French.', additional_kwargs={}),
 HumanMessage(content='I love programming.', additional_kwargs={})]
```

</CodeOutputBlock>
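The "one prompt, two views" idea behind `PromptValue` can be sketched with a small stdlib-only class (this is a hypothetical illustration, not LangChain's actual `PromptValue`): the same formatted content is exposed both as a message list for chat models and as a flat string for LLMs.

```python
from dataclasses import dataclass

@dataclass
class SimplePromptValue:
    """Hypothetical stand-in for the PromptValue idea: (role, content) pairs."""
    messages: list

    def to_messages(self) -> list:
        # Chat-model view: the role-tagged messages as-is.
        return self.messages

    def to_string(self) -> str:
        # LLM view: flatten the messages into one role-prefixed string.
        return "\n".join(f"{role.capitalize()}: {content}" for role, content in self.messages)

pv = SimplePromptValue([("system", "You are a helpful assistant."),
                        ("human", "I love programming.")])
pv.to_string()
# -> 'System: You are a helpful assistant.\nHuman: I love programming.'
```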