langchain/docs/extras/guides/expression_language/interface.ipynb

{
"cells": [
{
"cell_type": "markdown",
"id": "9a9acd2e",
"metadata": {},
"source": [
"# Interface\n",
"\n",
"In an effort to make it as easy as possible to create custom chains, we've implemented a [\"Runnable\"](https://api.python.langchain.com/en/latest/schema/langchain.schema.runnable.Runnable.html#langchain.schema.runnable.Runnable) protocol that most components implement. This is a standard interface with a few different methods, which makes it easy to define custom chains as well as making it possible to invoke them in a standard way. The standard interface exposed includes:\n",
"\n",
"- `stream`: stream back chunks of the response\n",
"- `invoke`: call the chain on an input\n",
"- `batch`: call the chain on a list of inputs\n",
"\n",
"These also have corresponding async methods:\n",
"\n",
"- `astream`: stream back chunks of the response async\n",
"- `ainvoke`: call the chain on an input async\n",
"- `abatch`: call the chain on a list of inputs async\n",
"\n",
"The type of the input varies by component. For a prompt it is a dictionary, for a retriever it is a single string, for a model either a single string, a list of chat messages, or a PromptValue.\n",
"\n",
"The output type also varies by component. For an LLM it is a string, for a ChatModel it's a ChatMessage, for a prompt it's a PromptValue, for a retriever it's a list of documents.\n",
"\n",
"Let's take a look at these methods! To do so, we'll create a super simple PromptTemplate + ChatModel chain."
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "466b65b3",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.chat_models import ChatOpenAI"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "3c634ef0",
"metadata": {},
"outputs": [],
"source": [
"model = ChatOpenAI()"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "d1850a1f",
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_template(\"tell me a joke about {topic}\")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "56d0669f",
"metadata": {},
"outputs": [],
"source": [
"chain = prompt | model"
]
},
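{
"cell_type": "markdown",
"id": "a1b2c3d4",
"metadata": {},
"source": [
"Since each component implements `Runnable`, we can also invoke the pieces on their own. As a quick illustration of the per-component input and output types described above (a sketch; the exact classes returned may vary by version), the prompt takes a dictionary and returns a `PromptValue`, which the model accepts directly:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b2c3d4e5",
"metadata": {},
"outputs": [],
"source": [
"# The prompt takes a dictionary and produces a PromptValue\n",
"prompt_value = prompt.invoke({\"topic\": \"bears\"})\n",
"\n",
"# The chat model accepts a PromptValue (or a string, or a list of messages)\n",
"# and returns a chat message\n",
"model.invoke(prompt_value)"
]
},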
{
"cell_type": "markdown",
"id": "daf2b2b2",
"metadata": {},
"source": [
"## Stream"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "bea9639d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Sure, here's a bear-themed joke for you:\n",
"\n",
"Why don't bears wear shoes?\n",
"\n",
"Because they have bear feet!"
]
}
],
"source": [
"for s in chain.stream({\"topic\": \"bears\"}):\n",
" print(s.content, end=\"\")"
]
},
{
"cell_type": "markdown",
"id": "cbf1c782",
"metadata": {},
"source": [
"## Invoke"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "470e483f",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Why don't bears wear shoes?\\n\\nBecause they already have bear feet!\", additional_kwargs={}, example=False)"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"topic\": \"bears\"})"
]
},
{
"cell_type": "markdown",
"id": "88f0c279",
"metadata": {},
"source": [
"## Batch"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "9685de67",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[AIMessage(content=\"Why don't bears ever wear shoes?\\n\\nBecause they have bear feet!\", additional_kwargs={}, example=False),\n",
" AIMessage(content=\"Why don't cats play poker in the wild?\\n\\nToo many cheetahs!\", additional_kwargs={}, example=False)]"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.batch([{\"topic\": \"bears\"}, {\"topic\": \"cats\"}])"
]
},
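{
"cell_type": "markdown",
"id": "c3d4e5f6",
"metadata": {},
"source": [
"`batch` runs the underlying calls concurrently. Depending on the LangChain version installed, you may also be able to cap the parallelism by passing a `config` with `max_concurrency` (an illustrative sketch, not guaranteed for every release):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d4e5f6a7",
"metadata": {},
"outputs": [],
"source": [
"# Limit the number of concurrent requests (if supported by your version)\n",
"chain.batch([{\"topic\": \"bears\"}, {\"topic\": \"cats\"}], config={\"max_concurrency\": 2})"
]
},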
{
"cell_type": "markdown",
"id": "b960cbfe",
"metadata": {},
"source": [
"## Async Stream"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "ea35eee4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Why don't bears wear shoes?\n",
"\n",
"Because they have bear feet!"
]
}
],
"source": [
"async for s in chain.astream({\"topic\": \"bears\"}):\n",
" print(s.content, end=\"\")"
]
},
{
"cell_type": "markdown",
"id": "04cb3324",
"metadata": {},
"source": [
"## Async Invoke"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "ef8c9b20",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Sure, here you go:\\n\\nWhy don't bears wear shoes?\\n\\nBecause they have bear feet!\", additional_kwargs={}, example=False)"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await chain.ainvoke({\"topic\": \"bears\"})"
]
},
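{
"cell_type": "markdown",
"id": "e5f6a7b8",
"metadata": {},
"source": [
"Because `ainvoke` is a regular coroutine, it also composes with standard `asyncio` tooling. For example (a sketch using only the chain defined above), several invocations can be awaited concurrently with `asyncio.gather`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f6a7b8c9",
"metadata": {},
"outputs": [],
"source": [
"import asyncio\n",
"\n",
"# Run two independent invocations concurrently; the notebook's event loop\n",
"# is already running, so we can await gather directly\n",
"await asyncio.gather(\n",
"    chain.ainvoke({\"topic\": \"bears\"}),\n",
"    chain.ainvoke({\"topic\": \"cats\"}),\n",
")"
]
},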
{
"cell_type": "markdown",
"id": "3da288d5",
"metadata": {},
"source": [
"## Async Batch"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "eba2a103",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[AIMessage(content=\"Why don't bears wear shoes?\\n\\nBecause they have bear feet!\", additional_kwargs={}, example=False)]"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await chain.abatch([{\"topic\": \"bears\"}])"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}