langchain/docs/docs/expression_language/how_to/inspect.ipynb

{
"cells": [
{
"cell_type": "markdown",
"id": "8c5eb99a",
"metadata": {},
"source": [
"# Inspect your runnables\n",
"\n",
"Once you create a runnable with LCEL, you may often want to inspect it to get a better sense of what is going on. This notebook covers some methods for doing so.\n",
"\n",
"First, let's create an example LCEL chain, one that does retrieval."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d816e954",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-openai faiss-cpu tiktoken"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "a88f4b24",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.vectorstores import FAISS\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI, OpenAIEmbeddings"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "139228c2",
"metadata": {},
"outputs": [],
"source": [
"vectorstore = FAISS.from_texts(\n",
" [\"harrison worked at kensho\"], embedding=OpenAIEmbeddings()\n",
")\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
"template = \"\"\"Answer the question based only on the following context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"\n",
"model = ChatOpenAI()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "70e3fe93",
"metadata": {},
"outputs": [],
"source": [
"chain = (\n",
" {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
" | prompt\n",
" | model\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "markdown",
"id": "849e3c42",
"metadata": {},
"source": [
"## Get a graph\n",
"\n",
"You can get a graph of the runnable with the `get_graph()` method:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2448b6c2",
"metadata": {},
"outputs": [],
"source": [
"chain.get_graph()"
]
},
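{
"cell_type": "markdown",
"id": "3f1a2b4c",
"metadata": {},
"source": [
"The returned graph object also exposes the nodes and edges directly, which can be handy if you want to inspect the chain's structure programmatically rather than visually. (The `nodes` and `edges` attribute names here are assumed from `langchain_core.runnables.graph.Graph`; check your installed version if they differ.)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5c7d9e1f",
"metadata": {},
"outputs": [],
"source": [
"graph = chain.get_graph()\n",
"# Inspect the graph structure programmatically (attribute names assumed)\n",
"print(graph.nodes)\n",
"print(graph.edges)"
]
},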
{
"cell_type": "markdown",
"id": "065b02fb",
"metadata": {},
"source": [
"## Print a graph\n",
"\n",
"The raw graph object is not very legible, but you can print an ASCII rendering of it that is easier to understand:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "d5ab1515",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" +---------------------------------+ \n",
" | Parallel<context,question>Input | \n",
" +---------------------------------+ \n",
" ** ** \n",
" *** *** \n",
" ** ** \n",
"+----------------------+ +-------------+ \n",
"| VectorStoreRetriever | | Passthrough | \n",
"+----------------------+ +-------------+ \n",
" ** ** \n",
" *** *** \n",
" ** ** \n",
" +----------------------------------+ \n",
" | Parallel<context,question>Output | \n",
" +----------------------------------+ \n",
" * \n",
" * \n",
" * \n",
" +--------------------+ \n",
" | ChatPromptTemplate | \n",
" +--------------------+ \n",
" * \n",
" * \n",
" * \n",
" +------------+ \n",
" | ChatOpenAI | \n",
" +------------+ \n",
" * \n",
" * \n",
" * \n",
" +-----------------+ \n",
" | StrOutputParser | \n",
" +-----------------+ \n",
" * \n",
" * \n",
" * \n",
" +-----------------------+ \n",
" | StrOutputParserOutput | \n",
" +-----------------------+ \n"
]
}
],
"source": [
"chain.get_graph().print_ascii()"
]
},
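{
"cell_type": "markdown",
"id": "7b2c4d8a",
"metadata": {},
"source": [
"Depending on your `langchain-core` version, the graph may also support a Mermaid rendering, which is convenient for embedding in docs (the `draw_mermaid()` method name is assumed here; recent versions of `langchain_core.runnables.graph.Graph` provide it):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9e4f6a2b",
"metadata": {},
"outputs": [],
"source": [
"# Render the chain's graph as Mermaid syntax (method availability assumed)\n",
"print(chain.get_graph().draw_mermaid())"
]
},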
{
"cell_type": "markdown",
"id": "2babf851",
"metadata": {},
"source": [
"## Get the prompts\n",
"\n",
"The prompts used are an important part of every chain. You can get the prompts present in the chain:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "34b2118d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[ChatPromptTemplate(input_variables=['context', 'question'], messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['context', 'question'], template='Answer the question based only on the following context:\\n{context}\\n\\nQuestion: {question}\\n'))])]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.get_prompts()"
]
},
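{
"cell_type": "markdown",
"id": "1d3e5f7c",
"metadata": {},
"source": [
"For example, you can loop over the returned prompts to check which input variables each one expects (as shown in the output above, each prompt carries an `input_variables` attribute):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2a4b6c8d",
"metadata": {},
"outputs": [],
"source": [
"# Print the expected input variables for each prompt in the chain\n",
"for p in chain.get_prompts():\n",
"    print(p.input_variables)"
]
},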
{
"cell_type": "code",
"execution_count": null,
"id": "ed965769",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}