{
"cells": [
{
"cell_type": "raw",
"id": "023635f2-71cf-43f2-a2e2-a7b4ced30a74",
"metadata": {},
"source": [
"---\n",
"sidebar_position: 2\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "86fc5bb2-017f-434e-8cd6-53ab214a5604",
"metadata": {},
"source": [
"# Add chat history\n",
"\n",
"In many Q&A applications we want to allow the user to have a back-and-forth conversation, meaning the application needs some sort of \"memory\" of past questions and answers, and some logic for incorporating those into its current thinking.\n",
"\n",
"In this guide we focus on **adding logic for incorporating historical messages.** Further details on chat history management is [covered here](/docs/expression_language/how_to/message_history).\n",
"\n",
"We'll work off of the Q&A app we built over the [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) blog post by Lilian Weng in the [Quickstart](/docs/use_cases/question_answering/quickstart). We'll need to update two things about our existing app:\n",
"\n",
"1. **Prompt**: Update our prompt to support historical messages as an input.\n",
"2. **Contextualizing questions**: Add a sub-chain that takes the latest user question and reformulates it in the context of the chat history. This is needed in case the latest question references some context from past messages. For example, if a user asks a follow-up question like \"Can you elaborate on the second point?\", this cannot be understood without the context of the previous message. Therefore we can't effectively perform retrieval with a question like this."
]
},
{
"cell_type": "markdown",
"id": "487d8d79-5ee9-4aa4-9fdf-cd5f4303e099",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"### Dependencies\n",
"\n",
"We'll use an OpenAI chat model and embeddings and a Chroma vector store in this walkthrough, but everything shown here works with any [ChatModel](/docs/modules/model_io/chat/) or [LLM](/docs/modules/model_io/llms/), [Embeddings](/docs/modules/data_connection/text_embedding/), and [VectorStore](/docs/modules/data_connection/vectorstores/) or [Retriever](/docs/modules/data_connection/retrievers/). \n",
"\n",
"We'll use the following packages:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "28d272cd-4e31-40aa-bbb4-0be0a1f49a14",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-community langchainhub langchain-openai chromadb bs4"
]
},
{
"cell_type": "markdown",
"id": "51ef48de-70b6-4f43-8e0b-ab9b84c9c02a",
"metadata": {},
"source": [
"We need to set environment variable `OPENAI_API_KEY`, which can be done directly or loaded from a `.env` file like so:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "143787ca-d8e6-4dc9-8281-4374f4d71720",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
"\n",
"# import dotenv\n",
"\n",
"# dotenv.load_dotenv()"
]
},
{
"cell_type": "markdown",
"id": "1665e740-ce01-4f09-b9ed-516db0bd326f",
"metadata": {},
"source": [
"### LangSmith\n",
"\n",
"Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with [LangSmith](https://smith.langchain.com).\n",
"\n",
"Note that LangSmith is not needed, but it is helpful. If you do want to use LangSmith, after you sign up at the link above, make sure to set your environment variables to start logging traces:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "07411adb-3722-4f65-ab7f-8f6f57663d11",
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()"
]
},
{
"cell_type": "markdown",
"id": "fa6ba684-26cf-4860-904e-a4d51380c134",
"metadata": {},
"source": [
"## Chain without chat history\n",
"\n",
"Here is the Q&A app we built over the [LLM Powered Autonomous Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) blog post by Lilian Weng in the [Quickstart](/docs/use_cases/question_answering/quickstart):"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "d8a913b1-0eea-442a-8a64-ec73333f104b",
"metadata": {},
"outputs": [],
"source": [
"import bs4\n",
"from langchain import hub\n",
"from langchain_community.document_loaders import WebBaseLoader\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "820244ae-74b4-4593-b392-822979dd91b8",
"metadata": {},
"outputs": [],
"source": [
"# Load, chunk and index the contents of the blog.\n",
"loader = WebBaseLoader(\n",
" web_paths=(\"https://lilianweng.github.io/posts/2023-06-23-agent/\",),\n",
" bs_kwargs=dict(\n",
" parse_only=bs4.SoupStrainer(\n",
" class_=(\"post-content\", \"post-title\", \"post-header\")\n",
" )\n",
" ),\n",
")\n",
"docs = loader.load()\n",
"\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)\n",
"splits = text_splitter.split_documents(docs)\n",
"vectorstore = Chroma.from_documents(documents=splits, embedding=OpenAIEmbeddings())\n",
"\n",
"# Retrieve and generate using the relevant snippets of the blog.\n",
"retriever = vectorstore.as_retriever()\n",
"prompt = hub.pull(\"rlm/rag-prompt\")\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0)\n",
"\n",
"\n",
"def format_docs(docs):\n",
" return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
"\n",
"\n",
"rag_chain = (\n",
" {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
" | prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "22206dfd-d673-4fa4-887f-349d273cb3f2",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Task Decomposition is a technique used to break down complex tasks into smaller and simpler steps. This approach helps agents to plan and execute tasks more efficiently by dividing them into manageable subgoals. Task decomposition can be achieved through various methods, including using prompting techniques, task-specific instructions, or human inputs.'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"rag_chain.invoke(\"What is Task Decomposition?\")"
]
},
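{
"cell_type": "markdown",
"id": "3d1a9b60-2c4e-4f0b-8a7d-5e6f1c2b3a4d",
"metadata": {},
"source": [
"This works well for standalone questions. To see why chat history matters, we can try a follow-up question against this history-free chain (a quick sketch using the `rag_chain` defined above). Without the earlier exchange, \"it\" has no referent:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8f4e2a1b-6c5d-4e3f-9a0b-1c2d3e4f5a6b",
"metadata": {},
"outputs": [],
"source": [
"# A follow-up question asked with no chat history: the chain cannot\n",
"# resolve what \"it\" refers to, so it retrieves against the ambiguous\n",
"# query as-is and the answer will likely miss the mark.\n",
"rag_chain.invoke(\"What are common ways of doing it?\")"
]
},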
{
"cell_type": "markdown",
"id": "776ae958-cbdc-4471-8669-c6087436f0b5",
"metadata": {},
"source": [
"## Contextualizing the question\n",
"\n",
"First we'll need to define a sub-chain that takes historical messages and the latest user question, and reformulates the question if it makes reference to any information in the historical information.\n",
"\n",
"We'll use a prompt that includes a `MessagesPlaceholder` variable under the name \"chat_history\". This allows us to pass in a list of Messages to the prompt using the \"chat_history\" input key, and these messages will be inserted after the system message and before the human message containing the latest question.\n",
"\n",
"Note that we leverage a helper function [create_history_aware_retriever](https://api.python.langchain.com/en/latest/chains/langchain.chains.history_aware_retriever.create_history_aware_retriever.html) for this step, which manages the case where `chat_history` is empty, and otherwise applies `prompt | llm | StrOutputParser() | retriever` in sequence.\n",
"\n",
"`create_history_aware_retriever` constructs a chain that accepts keys `input` and `chat_history` as input, and has the same output schema as a retriever."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "2b685428-8b82-4af1-be4f-7232c5d55b73",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import create_history_aware_retriever\n",
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"\n",
"contextualize_q_system_prompt = \"\"\"Given a chat history and the latest user question \\\n",
"which might reference context in the chat history, formulate a standalone question \\\n",
"which can be understood without the chat history. Do NOT answer the question, \\\n",
"just reformulate it if needed and otherwise return it as is.\"\"\"\n",
"contextualize_q_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", contextualize_q_system_prompt),\n",
" MessagesPlaceholder(\"chat_history\"),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"history_aware_retriever = create_history_aware_retriever(\n",
" llm, retriever, contextualize_q_prompt\n",
")"
]
},
{
"cell_type": "markdown",
"id": "23cbd8d7-7162-4fb0-9e69-67ea4d4603a5",
"metadata": {},
"source": [
"This chain prepends a rephrasing of the input query to our retriever, so that the retrieval incorporates the context of the conversation."
]
},
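{
"cell_type": "markdown",
"id": "b2c3d4e5-f6a7-4b8c-9d0e-1f2a3b4c5d6e",
"metadata": {},
"source": [
"To see what the retriever receives, here is a minimal sketch (not part of the final chain) that runs the rephrasing step on its own, using the same `prompt | llm | StrOutputParser()` composition that `create_history_aware_retriever` applies when the history is non-empty:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c7d8e9f0-a1b2-4c3d-8e4f-5a6b7c8d9e0f",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import AIMessage, HumanMessage\n",
"\n",
"# Sketch: run the rephrasing step standalone to inspect the reformulated,\n",
"# self-contained question that would be sent to the retriever.\n",
"rephrase_chain = contextualize_q_prompt | llm | StrOutputParser()\n",
"rephrase_chain.invoke(\n",
"    {\n",
"        \"chat_history\": [\n",
"            HumanMessage(content=\"What is Task Decomposition?\"),\n",
"            AIMessage(content=\"Task decomposition breaks a task into smaller steps.\"),\n",
"        ],\n",
"        \"input\": \"What are common ways of doing it?\",\n",
"    }\n",
")"
]
},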
{
"cell_type": "markdown",
"id": "42a47168-4a1f-4e39-bd2d-d5b03609a243",
"metadata": {},
"source": [
"## Chain with chat history\n",
"\n",
"And now we can build our full QA chain. \n",
"\n",
"Here we use [create_stuff_documents_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.combine_documents.stuff.create_stuff_documents_chain.html) to generate a `question_answer_chain`, with input keys `context`, `chat_history`, and `input`-- it accepts the retrieved context alongside the conversation history and query to generate an answer.\n",
"\n",
"We build our final `rag_chain` with [create_retrieval_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval.create_retrieval_chain.html). This chain applies the `history_aware_retriever` and `question_answer_chain` in sequence, retaining intermediate outputs such as the retrieved context for convenience. It has input keys `input` and `chat_history`, and includes `input`, `chat_history`, `context`, and `answer` in its output."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "66f275f3-ddef-4678-b90d-ee64576878f9",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import create_retrieval_chain\n",
"from langchain.chains.combine_documents import create_stuff_documents_chain\n",
"\n",
"qa_system_prompt = \"\"\"You are an assistant for question-answering tasks. \\\n",
"Use the following pieces of retrieved context to answer the question. \\\n",
"If you don't know the answer, just say that you don't know. \\\n",
"Use three sentences maximum and keep the answer concise.\\\n",
"\n",
"{context}\"\"\"\n",
"qa_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", qa_system_prompt),\n",
" MessagesPlaceholder(\"chat_history\"),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"\n",
"question_answer_chain = create_stuff_documents_chain(llm, qa_prompt)\n",
"\n",
"rag_chain = create_retrieval_chain(history_aware_retriever, question_answer_chain)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "0005810b-1b95-4666-a795-08d80e478b83",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Task decomposition can be done in several common ways, including using Language Model (LLM) with simple prompting like \"Steps for XYZ\" or \"What are the subgoals for achieving XYZ?\", providing task-specific instructions tailored to the specific task at hand, or incorporating human inputs to guide the decomposition process. These methods help in breaking down complex tasks into smaller, more manageable subtasks for efficient execution.\n"
]
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"chat_history = []\n",
"\n",
"question = \"What is Task Decomposition?\"\n",
"ai_msg_1 = rag_chain.invoke({\"input\": question, \"chat_history\": chat_history})\n",
"chat_history.extend([HumanMessage(content=question), ai_msg_1[\"answer\"]])\n",
"\n",
"second_question = \"What are common ways of doing it?\"\n",
"ai_msg_2 = rag_chain.invoke({\"input\": second_question, \"chat_history\": chat_history})\n",
"\n",
"print(ai_msg_2[\"answer\"])"
]
},
{
"cell_type": "markdown",
"id": "53263a65-4de2-4dd8-9291-6a8169ab6f1d",
"metadata": {},
"source": [
":::{.callout-tip}\n",
"\n",
"Check out the [LangSmith trace](https://smith.langchain.com/public/243301e4-4cc5-4e52-a6e7-8cfe9208398d/r) \n",
"\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "68ff9dd9-8492-418e-a889-c4d670c47d0b",
"metadata": {},
"source": [
"### Returning sources"
]
},
{
"cell_type": "markdown",
"id": "eff75637-ac5c-41c7-955a-8f58f9d2a89e",
"metadata": {},
"source": [
"Often in Q&A applications it's important to show users the sources that were used to generate the answer. LangChain's built-in `create_retrieval_chain` will propagate retrieved source documents through to the output in the `\"context\"` key:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "dc0eeca7-883a-4d67-a9fc-b0fa746e3bb2",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"page_content='Tree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.\\nTask decomposition can be done (1) by LLM with simple prompting like \"Steps for XYZ.\\\\n1.\", \"What are the subgoals for achieving XYZ?\", (2) by using task-specific instructions; e.g. \"Write a story outline.\" for writing a novel, or (3) with human inputs.' metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}\n",
"\n",
"page_content='Fig. 1. Overview of a LLM-powered autonomous agent system.\\nComponent One: Planning#\\nA complicated task usually involves many steps. An agent needs to know what they are and plan ahead.\\nTask Decomposition#\\nChain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the models thinking process.' metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}\n",
"\n",
"page_content='Resources:\\n1. Internet access for searches and information gathering.\\n2. Long Term memory management.\\n3. GPT-3.5 powered Agents for delegation of simple tasks.\\n4. File output.\\n\\nPerformance Evaluation:\\n1. Continuously review and analyze your actions to ensure you are performing to the best of your abilities.\\n2. Constructively self-criticize your big-picture behavior constantly.\\n3. Reflect on past decisions and strategies to refine your approach.\\n4. Every command has a cost, so be smart and efficient. Aim to complete tasks in the least number of steps.' metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}\n",
"\n",
"page_content='Fig. 11. Illustration of how HuggingGPT works. (Image source: Shen et al. 2023)\\nThe system comprises of 4 stages:\\n(1) Task planning: LLM works as the brain and parses the user requests into multiple tasks. There are four attributes associated with each task: task type, ID, dependencies, and arguments. They use few-shot examples to guide LLM to do task parsing and planning.\\nInstruction:' metadata={'source': 'https://lilianweng.github.io/posts/2023-06-23-agent/'}\n",
"\n"
]
}
],
"source": [
"for document in ai_msg_2[\"context\"]:\n",
" print(document)\n",
" print()"
]
},
{
"cell_type": "markdown",
"id": "0ab1ded4-76d9-453f-9b9b-db9a4560c737",
"metadata": {},
"source": [
"## Tying it together"
]
},
{
"cell_type": "markdown",
"id": "8a08a5ea-df5b-4547-93c6-2a3940dd5c3e",
"metadata": {},
"source": [
"\n",
"![](../../../static/img/conversational_retrieval_chain.png)\n",
"\n",
"Here we've gone over how to add application logic for incorporating historical outputs, but we're still manually updating the chat history and inserting it into each input. In a real Q&A application we'll want some way of persisting chat history and some way of automatically inserting and updating it.\n",
"\n",
"For this we can use:\n",
"\n",
"- [BaseChatMessageHistory](/docs/modules/memory/chat_messages/): Store chat history.\n",
"- [RunnableWithMessageHistory](/docs/expression_language/how_to/message_history): Wrapper for an LCEL chain and a `BaseChatMessageHistory` that handles injecting chat history into inputs and updating it after each invocation.\n",
"\n",
"For a detailed walkthrough of how to use these classes together to create a stateful conversational chain, head to the [How to add message history (memory)](/docs/expression_language/how_to/message_history) LCEL page.\n",
"\n",
"Below, we implement a simple example of the second option, in which chat histories are stored in a simple dict.\n",
"\n",
"For convenience, we tie together all of the necessary steps in a single code cell:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "71c32048-1a41-465f-a9e2-c4affc332fd9",
"metadata": {},
"outputs": [],
"source": [
"import bs4\n",
"from langchain import hub\n",
"from langchain.chains import create_history_aware_retriever, create_retrieval_chain\n",
"from langchain.chains.combine_documents import create_stuff_documents_chain\n",
"from langchain_community.chat_message_histories import ChatMessageHistory\n",
"from langchain_community.document_loaders import WebBaseLoader\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_core.chat_history import BaseChatMessageHistory\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0)\n",
"\n",
"\n",
"### Construct retriever ###\n",
"loader = WebBaseLoader(\n",
" web_paths=(\"https://lilianweng.github.io/posts/2023-06-23-agent/\",),\n",
" bs_kwargs=dict(\n",
" parse_only=bs4.SoupStrainer(\n",
" class_=(\"post-content\", \"post-title\", \"post-header\")\n",
" )\n",
" ),\n",
")\n",
"docs = loader.load()\n",
"\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)\n",
"splits = text_splitter.split_documents(docs)\n",
"vectorstore = Chroma.from_documents(documents=splits, embedding=OpenAIEmbeddings())\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
"\n",
"### Contextualize question ###\n",
"contextualize_q_system_prompt = \"\"\"Given a chat history and the latest user question \\\n",
"which might reference context in the chat history, formulate a standalone question \\\n",
"which can be understood without the chat history. Do NOT answer the question, \\\n",
"just reformulate it if needed and otherwise return it as is.\"\"\"\n",
"contextualize_q_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", contextualize_q_system_prompt),\n",
" MessagesPlaceholder(\"chat_history\"),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"history_aware_retriever = create_history_aware_retriever(\n",
" llm, retriever, contextualize_q_prompt\n",
")\n",
"\n",
"\n",
"### Answer question ###\n",
"qa_system_prompt = \"\"\"You are an assistant for question-answering tasks. \\\n",
"Use the following pieces of retrieved context to answer the question. \\\n",
"If you don't know the answer, just say that you don't know. \\\n",
"Use three sentences maximum and keep the answer concise.\\\n",
"\n",
"{context}\"\"\"\n",
"qa_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", qa_system_prompt),\n",
" MessagesPlaceholder(\"chat_history\"),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"question_answer_chain = create_stuff_documents_chain(llm, qa_prompt)\n",
"\n",
"rag_chain = create_retrieval_chain(history_aware_retriever, question_answer_chain)\n",
"\n",
"\n",
"### Statefully manage chat history ###\n",
"store = {}\n",
"\n",
"\n",
"def get_session_history(session_id: str) -> BaseChatMessageHistory:\n",
" if session_id not in store:\n",
" store[session_id] = ChatMessageHistory()\n",
" return store[session_id]\n",
"\n",
"\n",
"conversational_rag_chain = RunnableWithMessageHistory(\n",
" rag_chain,\n",
" get_session_history,\n",
" input_messages_key=\"input\",\n",
" history_messages_key=\"chat_history\",\n",
" output_messages_key=\"answer\",\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "6d0a7a73-d151-47d9-9e99-b4f3291c0322",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. This approach helps agents or models handle difficult tasks by dividing them into more manageable subtasks. It can be achieved through methods like Chain of Thought (CoT) or Tree of Thoughts, which guide the model in thinking step by step or exploring multiple reasoning possibilities at each step.'"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversational_rag_chain.invoke(\n",
" {\"input\": \"What is Task Decomposition?\"},\n",
" config={\n",
" \"configurable\": {\"session_id\": \"abc123\"}\n",
" }, # constructs a key \"abc123\" in `store`.\n",
")[\"answer\"]"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "17021822-896a-4513-a17d-1d20b1c5381c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Task decomposition can be done in common ways such as using Language Model (LLM) with simple prompting, task-specific instructions, or human inputs. For example, LLM can be guided with prompts like \"Steps for XYZ\" to break down tasks, or specific instructions like \"Write a story outline\" can be given for task decomposition. Additionally, human inputs can also be utilized to decompose tasks into smaller, more manageable steps.'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversational_rag_chain.invoke(\n",
" {\"input\": \"What are common ways of doing it?\"},\n",
" config={\"configurable\": {\"session_id\": \"abc123\"}},\n",
")[\"answer\"]"
]
},
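{
"cell_type": "markdown",
"id": "e1f2a3b4-c5d6-4e7f-8a9b-0c1d2e3f4a5b",
"metadata": {},
"source": [
"Under the hood, `RunnableWithMessageHistory` has been appending each question and answer to the `ChatMessageHistory` kept in our `store` dict. As a quick sanity check, we can print the messages accumulated for session `\"abc123\"`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f9e8d7c6-b5a4-4f3e-9d2c-1b0a9f8e7d6c",
"metadata": {},
"outputs": [],
"source": [
"# Inspect the chat history that RunnableWithMessageHistory accumulated\n",
"# in `store` across the two invocations above.\n",
"for message in store[\"abc123\"].messages:\n",
"    print(f\"{message.type}: {message.content[:80]}\")"
]
}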
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.0"
}
},
"nbformat": 4,
"nbformat_minor": 5
}