|
{
"cells": [
{
"cell_type": "markdown",
"id": "b022ab74-794d-4c54-ad47-ff9549ddb9d2",
"metadata": {},
"source": [
"# Use RunnableMaps\n",
"\n",
"RunnableMaps make it easy to execute multiple Runnables in parallel, and to return the output of these Runnables as a map."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "7e1873d6-d4b6-43ac-96a1-edcf178201e0",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'joke': AIMessage(content=\"Why don't bears wear shoes? \\nBecause they have bear feet!\", additional_kwargs={}, example=False),\n",
" 'poem': AIMessage(content=\"In twilight's embrace, a bear's gentle lumber,\\nSilent strength, nature's awe, a humble slumber.\", additional_kwargs={}, example=False)}"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.schema.runnable import RunnableMap\n",
"\n",
"\n",
"model = ChatOpenAI()\n",
"joke_chain = ChatPromptTemplate.from_template(\"tell me a joke about {topic}\") | model\n",
"poem_chain = ChatPromptTemplate.from_template(\"write a 2-line poem about {topic}\") | model\n",
"\n",
"map_chain = RunnableMap({\"joke\": chain1, \"poem\": chain2,})\n",
|
|||
|
"\n",
|
|||
|
"map_chain.invoke({\"topic\": \"bear\"})"
|
|||
|
]
|
|||
|
},
|
|||
|
{
|
|||
|
"cell_type": "markdown",
|
|||
|
"id": "df867ae9-1cec-4c9e-9fef-21969b206af5",
|
|||
|
"metadata": {},
|
|||
|
"source": [
|
|||
|
"## Manipulating outputs/inputs\n",
|
|||
|
"Maps can be useful for manipulating the output of one Runnable to match the input format of the next Runnable in a sequence."
|
|||
|
]
|
|||
|
},
|
|||
|
{
|
|||
|
"cell_type": "code",
|
|||
|
"execution_count": 4,
|
|||
|
"id": "267d1460-53c1-4fdb-b2c3-b6a1eb7fccff",
|
|||
|
"metadata": {},
|
|||
|
"outputs": [
|
|||
|
{
|
|||
|
"data": {
|
|||
|
"text/plain": [
|
|||
|
"'Harrison worked at Kensho.'"
|
|||
|
]
|
|||
|
},
|
|||
|
"execution_count": 4,
|
|||
|
"metadata": {},
|
|||
|
"output_type": "execute_result"
|
|||
|
}
|
|||
|
],
|
|||
|
"source": [
|
|||
|
"from langchain.embeddings import OpenAIEmbeddings\n",
|
|||
|
"from langchain.schema.output_parser import StrOutputParser\n",
|
|||
|
"from langchain.schema.runnable import RunnablePassthrough\n",
|
|||
|
"from langchain.vectorstores import FAISS\n",
|
|||
|
"\n",
|
|||
|
"vectorstore = FAISS.from_texts([\"harrison worked at kensho\"], embedding=OpenAIEmbeddings())\n",
|
|||
|
"retriever = vectorstore.as_retriever()\n",
|
|||
|
"template = \"\"\"Answer the question based only on the following context:\n",
|
|||
|
"{context}\n",
|
|||
|
"\n",
|
|||
|
"Question: {question}\n",
|
|||
|
"\"\"\"\n",
|
|||
|
"prompt = ChatPromptTemplate.from_template(template)\n",
|
|||
|
"\n",
|
|||
|
"retrieval_chain = (\n",
|
|||
|
" {\"context\": retriever, \"question\": RunnablePassthrough()} \n",
|
|||
|
" | prompt \n",
|
|||
|
" | model \n",
|
|||
|
" | StrOutputParser()\n",
|
|||
|
")\n",
|
|||
|
"\n",
|
|||
|
"retrieval_chain.invoke(\"where did harrison work?\")"
|
|||
|
]
|
|||
|
},
|
|||
|
{
|
|||
|
"cell_type": "markdown",
|
|||
|
"id": "392cd4c4-e7ed-4ab8-934d-f7a4eca55ee1",
|
|||
|
"metadata": {},
|
|||
|
"source": [
|
|||
|
"Here the input to prompt is expected to be a map with keys \"context\" and \"question\". The user input is just the question. So we need to get the context using our retriever and passthrough the user input under the \"question\" key.\n",
|
|||
|
"\n",
|
|||
|
"Note that when composing a RunnableMap when another Runnable we don't even need to wrap our dictuionary in the RunnableMap class — the type conversion is handled for us."
|
|||
|
]
|
|||
|
},
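{
"cell_type": "markdown",
"id": "f2a4c6e8-1b3d-4f5a-8c7e-2d9b0a1c3e5f",
"metadata": {},
"source": [
"As a quick sketch, the first step of the retrieval chain above can be written either with an explicit `RunnableMap` or with the plain dict we used; both build the same chain (the `explicit_chain` and `implicit_chain` names below are only illustrative, and the cell reuses `retriever`, `prompt`, and `model` from earlier):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a1b2c3d4-e5f6-4a7b-8c9d-0e1f2a3b4c5d",
"metadata": {},
"outputs": [],
"source": [
"# Both chains below are equivalent: piping a plain dict into another Runnable\n",
"# coerces it into a RunnableMap automatically.\n",
"explicit_chain = (\n",
"    RunnableMap({\"context\": retriever, \"question\": RunnablePassthrough()})\n",
"    | prompt\n",
"    | model\n",
"    | StrOutputParser()\n",
")\n",
"\n",
"implicit_chain = (\n",
"    {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
"    | prompt\n",
"    | model\n",
"    | StrOutputParser()\n",
")"
]
},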
{
"cell_type": "markdown",
"id": "833da249-c0d4-4e5b-b3f8-cab549f0f7e1",
"metadata": {},
"source": [
"## Parallelism\n",
"\n",
"RunnableMaps are also useful for running independent processes in parallel, since each Runnable in the map is executed in parallel. For example, we can see our earlier `joke_chain`, `poem_chain` and `map_chain` all have about the same runtime, even though `map_chain` executes both of the other two."
|
|||
|
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "38e47834-45af-4281-991f-86f150001510",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"958 ms ± 402 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n"
]
}
],
"source": [
"%%timeit\n",
"\n",
"joke_chain.invoke({\"topic\": \"bear\"})"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "d0cd40de-b37e-41fa-a2f6-8aaa49f368d6",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"1.22 s ± 508 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n"
]
}
],
"source": [
"%%timeit\n",
"\n",
"poem_chain.invoke({\"topic\": \"bear\"})"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "799894e1-8e18-4a73-b466-f6aea6af3920",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"1.15 s ± 119 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n"
]
}
],
"source": [
"%%timeit\n",
"\n",
"map_chain.invoke({\"topic\": \"bear\"})"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}