mirror of
https://github.com/hwchase17/langchain
synced 2024-10-29 17:07:25 +00:00
432 lines
10 KiB
Plaintext
{
 "cells": [
  {
   "cell_type": "raw",
   "id": "abf7263d-3a62-4016-b5d5-b157f92f2070",
   "metadata": {},
   "source": [
    "---\n",
    "sidebar_position: 0\n",
    "title: Prompt + LLM\n",
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9a434f2b-9405-468c-9dfd-254d456b57a6",
   "metadata": {},
   "source": [
    "The most common and valuable composition is taking:\n",
    "\n",
    "``PromptTemplate`` / ``ChatPromptTemplate`` -> ``LLM`` / ``ChatModel`` -> ``OutputParser``\n",
    "\n",
    "Almost any other chain you build will use this building block."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "93aa2c87",
   "metadata": {},
   "source": [
    "## PromptTemplate + LLM\n",
    "\n",
    "The simplest composition is combining a prompt and a model to create a chain that takes user input, adds it to the prompt, passes it to the model, and returns the raw model output.\n",
    "\n",
    "Note that you can mix and match PromptTemplates/ChatPromptTemplates and LLMs/ChatModels as you like here."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "466b65b3",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.prompts import ChatPromptTemplate\n",
    "from langchain.chat_models import ChatOpenAI\n",
    "\n",
    "prompt = ChatPromptTemplate.from_template(\"tell me a joke about {foo}\")\n",
    "model = ChatOpenAI()\n",
    "chain = prompt | model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "e3d0a6cd",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content=\"Why don't bears wear shoes?\\n\\nBecause they have bear feet!\", additional_kwargs={}, example=False)"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chain.invoke({\"foo\": \"bears\"})"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7eb9ef50",
   "metadata": {},
   "source": [
    "Often we want to attach kwargs that will be passed to each model call. Here are a few examples:"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0b1d8f88",
   "metadata": {},
   "source": [
    "### Attaching Stop Sequences"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "562a06bf",
   "metadata": {},
   "outputs": [],
   "source": [
    "chain = prompt | model.bind(stop=[\"\\n\"])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "43f5d04c",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content='Why did the bear never wear shoes?', additional_kwargs={}, example=False)"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chain.invoke({\"foo\": \"bears\"})"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f3eaf88a",
   "metadata": {},
   "source": [
    "### Attaching Function Call Information"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "f94b71b2",
   "metadata": {},
   "outputs": [],
   "source": [
    "functions = [\n",
    "    {\n",
    "        \"name\": \"joke\",\n",
    "        \"description\": \"A joke\",\n",
    "        \"parameters\": {\n",
    "            \"type\": \"object\",\n",
    "            \"properties\": {\n",
    "                \"setup\": {\n",
    "                    \"type\": \"string\",\n",
    "                    \"description\": \"The setup for the joke\"\n",
    "                },\n",
    "                \"punchline\": {\n",
    "                    \"type\": \"string\",\n",
    "                    \"description\": \"The punchline for the joke\"\n",
    "                }\n",
    "            },\n",
    "            \"required\": [\"setup\", \"punchline\"]\n",
    "        }\n",
    "    }\n",
    "]\n",
    "chain = prompt | model.bind(function_call={\"name\": \"joke\"}, functions=functions)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "decf7710",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content='', additional_kwargs={'function_call': {'name': 'joke', 'arguments': '{\\n \"setup\": \"Why don\\'t bears wear shoes?\",\\n \"punchline\": \"Because they have bear feet!\"\\n}'}}, example=False)"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chain.invoke({\"foo\": \"bears\"}, config={})"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9098c5ed",
   "metadata": {},
   "source": [
    "## PromptTemplate + LLM + OutputParser\n",
    "\n",
    "We can also add an output parser to easily transform the raw LLM/ChatModel output into a more workable format."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "cc194c78",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.schema.output_parser import StrOutputParser\n",
    "\n",
    "chain = prompt | model | StrOutputParser()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "77acf448",
   "metadata": {},
   "source": [
    "Notice that this now returns a string - a much more workable format for downstream tasks."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "e3d69a18",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\"Why don't bears wear shoes?\\n\\nBecause they have bear feet!\""
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chain.invoke({\"foo\": \"bears\"})"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c01864e5",
   "metadata": {},
   "source": [
    "### Functions Output Parser\n",
    "\n",
    "When you specify the function to return, you may just want to parse that directly."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "ad0dd88e",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.output_parsers.openai_functions import JsonOutputFunctionsParser\n",
    "\n",
    "chain = (\n",
    "    prompt\n",
    "    | model.bind(function_call={\"name\": \"joke\"}, functions=functions)\n",
    "    | JsonOutputFunctionsParser()\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "1e7aa8eb",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'setup': \"Why don't bears like fast food?\",\n",
       " 'punchline': \"Because they can't catch it!\"}"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chain.invoke({\"foo\": \"bears\"})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "d4aa1a01",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.output_parsers.openai_functions import JsonKeyOutputFunctionsParser\n",
    "\n",
    "chain = (\n",
    "    prompt\n",
    "    | model.bind(function_call={\"name\": \"joke\"}, functions=functions)\n",
    "    | JsonKeyOutputFunctionsParser(key_name=\"setup\")\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "8b6df9ba",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\"Why don't bears wear shoes?\""
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chain.invoke({\"foo\": \"bears\"})"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "023fbccb-ef7d-489e-a9ba-f98e17283d51",
   "metadata": {},
   "source": [
    "## Simplifying Input\n",
    "\n",
    "To make invocation even simpler, we can add a `RunnableMap` to take care of creating the prompt input dict for us:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "9601c0f0-71f9-4bd4-a672-7bd04084b018",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.schema.runnable import RunnableMap, RunnablePassthrough\n",
    "\n",
    "map_ = RunnableMap({\"foo\": RunnablePassthrough()})\n",
    "chain = (\n",
    "    map_\n",
    "    | prompt\n",
    "    | model.bind(function_call={\"name\": \"joke\"}, functions=functions)\n",
    "    | JsonKeyOutputFunctionsParser(key_name=\"setup\")\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "7ec4f154-fda5-4847-9220-41aa902fdc33",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\"Why don't bears wear shoes?\""
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chain.invoke(\"bears\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "def00bfe-0f83-4805-8c8f-8a53f99fa8ea",
   "metadata": {},
   "source": [
    "Since we're composing our map with another Runnable, we can even use some syntactic sugar and just use a dict:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "7bf3846a-02ee-41a3-ba1b-a708827d4f3a",
   "metadata": {},
   "outputs": [],
   "source": [
    "chain = (\n",
    "    {\"foo\": RunnablePassthrough()}\n",
    "    | prompt\n",
    "    | model.bind(function_call={\"name\": \"joke\"}, functions=functions)\n",
    "    | JsonKeyOutputFunctionsParser(key_name=\"setup\")\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "e566d6a1-538d-4cb5-a210-a63e082e4c74",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "\"Why don't bears like fast food?\""
      ]
     },
     "execution_count": 22,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chain.invoke(\"bears\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.1"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}