{
"cells": [
{
"cell_type": "markdown",
"id": "4153b116-206b-40f8-a684-bf082c5ebcea",
"metadata": {},
"source": [
"# Building a data analyst agent with LangGraph and Azure Container Apps dynamic sessions\n",
"\n",
"In this example we'll build an agent that can query a Postgres database and run Python code to analyze the retrieved data. We'll use [LangGraph](https://langchain-ai.github.io/langgraph/) for agent orchestration and [Azure Container Apps dynamic sessions](https://python.langchain.com/v0.2/docs/integrations/tools/azure_dynamic_sessions/) for safe Python code execution.\n",
"\n",
"**NOTE**: Building LLM systems that interact with SQL databases requires executing model-generated SQL queries. There are inherent risks in doing this. Make sure that your database connection permissions are always scoped as narrowly as possible for your agent's needs. This will mitigate though not eliminate the risks of building a model-driven system. For more on general security best practices, see our [security guidelines](https://python.langchain.com/v0.2/docs/security/)."
]
},
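{
"cell_type": "markdown",
"id": "added-scoped-connection-sketch",
"metadata": {},
"source": [
"As a minimal sketch of that advice (hypothetical, with placeholder names): point the agent at a database role that has been granted only `SELECT` on the tables it needs, rather than an admin account.\n",
"\n",
"```python\n",
"from langchain_community.utilities import SQLDatabase\n",
"\n",
"# Placeholder connection string for a read-only role with SELECT-only grants.\n",
"READ_ONLY_URI = \"postgresql+psycopg2://readonly_agent:<password>@<host>:5432/<db>\"\n",
"scoped_db = SQLDatabase.from_uri(READ_ONLY_URI)\n",
"```"
]
},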
{
"cell_type": "markdown",
"id": "3b70c2be-1141-4107-80db-787f7935102f",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"Let's get set up by installing our Python dependencies and setting our OpenAI credentials, Azure Container Apps sessions pool endpoint, and our SQL database connection string.\n",
"\n",
"### Install dependencies"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "302f827f-062c-4b83-8239-07b28bfc9651",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -qU langgraph langchain-azure-dynamic-sessions langchain-openai langchain-community pandas matplotlib"
]
},
{
"cell_type": "markdown",
"id": "7621655b-605c-4690-8ee1-77a4bab8b383",
"metadata": {},
"source": [
"### Set credentials\n",
"\n",
"By default this demo uses:\n",
"- Azure OpenAI for the model: https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource\n",
"- Azure PostgreSQL for the db: https://learn.microsoft.com/en-us/cli/azure/postgres/server?view=azure-cli-latest#az-postgres-server-create\n",
"- Azure Container Apps dynamic sessions for code execution: https://learn.microsoft.com/en-us/azure/container-apps/sessions-code-interpreter?\n",
"\n",
"This LangGraph architecture can also be used with any other [tool-calling LLM](https://python.langchain.com/v0.2/docs/how_to/tool_calling/) and any SQL database."
]
},
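{
"cell_type": "markdown",
"id": "added-swap-components-sketch",
"metadata": {},
"source": [
"For instance (a hypothetical substitution, not used in the rest of this notebook), you could swap in OpenAI directly and a local SQLite database:\n",
"\n",
"```python\n",
"from langchain_community.utilities import SQLDatabase\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o\")  # any chat model with tool-calling support\n",
"db = SQLDatabase.from_uri(\"sqlite:///example.db\")  # any SQLAlchemy-supported database\n",
"```"
]
},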
{
"cell_type": "code",
"execution_count": 21,
"id": "be7c74d8-485b-4c51-aded-07e8af838efe",
"metadata": {},
"outputs": [
{
"name": "stdin",
"output_type": "stream",
"text": [
"Azure OpenAI API key ········\n",
"Azure OpenAI endpoint ········\n",
"Azure OpenAI deployment name ········\n",
"Azure Container Apps dynamic sessions pool management endpoint ········\n",
"PostgreSQL connection string ········\n"
]
}
],
"source": [
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"AZURE_OPENAI_API_KEY\"] = getpass.getpass(\"Azure OpenAI API key\")\n",
"os.environ[\"AZURE_OPENAI_ENDPOINT\"] = getpass.getpass(\"Azure OpenAI endpoint\")\n",
"\n",
"AZURE_OPENAI_DEPLOYMENT_NAME = getpass.getpass(\"Azure OpenAI deployment name\")\n",
"SESSIONS_POOL_MANAGEMENT_ENDPOINT = getpass.getpass(\n",
" \"Azure Container Apps dynamic sessions pool management endpoint\"\n",
")\n",
"SQL_DB_CONNECTION_STRING = getpass.getpass(\"PostgreSQL connection string\")"
]
},
{
"cell_type": "markdown",
"id": "3712a7b0-3f7d-4d90-9319-febf7b046aa6",
"metadata": {},
"source": [
"### Imports"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "09c0a46e-a8b4-44e3-8d90-2e5d0f66c1ad",
"metadata": {},
"outputs": [],
"source": [
"import ast\n",
"import base64\n",
"import io\n",
"import json\n",
"import operator\n",
"from functools import partial\n",
"from typing import Annotated, List, Literal, Optional, Sequence, TypedDict\n",
"\n",
"import pandas as pd\n",
"from IPython.display import display\n",
"from langchain_azure_dynamic_sessions import SessionsPythonREPLTool\n",
"from langchain_community.utilities import SQLDatabase\n",
"from langchain_core.messages import AIMessage, BaseMessage, HumanMessage, ToolMessage\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_core.tools import tool\n",
"from langchain_openai import AzureChatOpenAI\n",
"from langgraph.graph import END, StateGraph\n",
"from langgraph.prebuilt import ToolNode\n",
"from matplotlib.pyplot import imshow\n",
"from PIL import Image"
]
},
{
"cell_type": "markdown",
"id": "5cc14582-313c-4a61-be5e-a7a1ba26a6e0",
"metadata": {},
"source": [
"## Instantiate model, DB, code interpreter\n",
"\n",
"We'll use the LangChain [SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html#langchain_community.utilities.sql_database.SQLDatabase) interface to connect to our DB and query it. This works with any SQL database supported by [SQLAlchemy](https://www.sqlalchemy.org/)."
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "9262ea34-c6ac-407c-96c3-aa5eaa1a8039",
"metadata": {},
"outputs": [],
"source": [
"db = SQLDatabase.from_uri(SQL_DB_CONNECTION_STRING)"
]
},
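{
"cell_type": "markdown",
"id": "added-db-sanity-check",
"metadata": {},
"source": [
"If you want to confirm the connection before wiring up the agent, a quick sanity check might look like this (illustrative; the output depends on your database):\n",
"\n",
"```python\n",
"print(db.dialect)  # e.g. 'postgresql'\n",
"print(db.get_usable_table_names())  # the tables the agent will see\n",
"print(db.run(\"SELECT 1\"))  # round-trip test query\n",
"```"
]
},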
{
"cell_type": "markdown",
"id": "1982c6f2-aa4e-4842-83f2-951205aa0854",
"metadata": {},
"source": [
"For our LLM we need to make sure that we use a model that supports [tool-calling](https://python.langchain.com/v0.2/docs/how_to/tool_calling/)."
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "ba6201a1-d760-45f1-b14a-bf8d85ceb775",
"metadata": {},
"outputs": [],
"source": [
"llm = AzureChatOpenAI(\n",
" deployment_name=AZURE_OPENAI_DEPLOYMENT_NAME, openai_api_version=\"2024-02-01\"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "92e2fcc7-812a-4d18-852f-2f814559b415",
"metadata": {},
"source": [
"And the [dynamic sessions tool](https://python.langchain.com/v0.2/docs/integrations/tools/azure_container_apps_dynamic_sessions/) is what we'll use for code execution."
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "89e5a315-c964-493d-84fb-1f453909caae",
"metadata": {},
"outputs": [],
"source": [
"repl = SessionsPythonREPLTool(\n",
" pool_management_endpoint=SESSIONS_POOL_MANAGEMENT_ENDPOINT\n",
")"
]
},
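{
"cell_type": "markdown",
"id": "added-repl-sanity-check",
"metadata": {},
"source": [
"You can also exercise the tool directly to verify that the session pool is reachable. A quick check (illustrative; the exact keys in the returned dict may vary by version):\n",
"\n",
"```python\n",
"result = repl.execute(\"print(1 + 1)\")\n",
"print(result)  # dict with the execution status, stdout/stderr, and result\n",
"```"
]
},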
{
"cell_type": "markdown",
"id": "ee084fbd-10d3-4328-9d8c-75ffa9437b31",
"metadata": {},
"source": [
"## Define graph\n",
"\n",
"Now we're ready to define our application logic. The core elements are the [agent State, Nodes, and Edges](https://langchain-ai.github.io/langgraph/concepts/#core-design).\n",
"\n",
"### Define State\n",
"We'll use a simple agent State which is just a list of messages that every Node can append to:"
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "7feef65d-bf11-41bb-9164-5249953eb02e",
"metadata": {},
"outputs": [],
"source": [
"class AgentState(TypedDict):\n",
" messages: Annotated[Sequence[BaseMessage], operator.add]"
]
},
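{
"cell_type": "markdown",
"id": "added-state-reducer-note",
"metadata": {},
"source": [
"The `operator.add` annotation tells LangGraph how to merge each node's return value into the state: message lists are concatenated rather than overwritten. Conceptually (an illustration of the reducer, not code the graph runs):\n",
"\n",
"```python\n",
"existing = {\"messages\": [HumanMessage(\"hi\")]}\n",
"update = {\"messages\": [AIMessage(\"hello!\")]}\n",
"# operator.add applied to the two lists is what the graph does under the hood:\n",
"merged = {\"messages\": existing[\"messages\"] + update[\"messages\"]}\n",
"```"
]
},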
{
"cell_type": "markdown",
"id": "58fe92a3-9a30-464b-bcf3-972af5b92e40",
"metadata": {},
"source": [
"Since our code interpreter can return results like base64-encoded images which we don't want to pass back to the model, we'll create a custom Tool message that allows us to track raw Tool outputs without sending them back to the model."
]
},
{
"cell_type": "code",
"execution_count": 32,
"id": "36e2d8a2-8881-40bc-81da-b40e8a152d9d",
"metadata": {},
"outputs": [],
"source": [
"class RawToolMessage(ToolMessage):\n",
" \"\"\"\n",
" Customized Tool message that lets us pass around the raw tool outputs (along with string contents for passing back to the model).\n",
" \"\"\"\n",
"\n",
" raw: dict\n",
" \"\"\"Arbitrary (non-string) tool outputs. Won't be sent to model.\"\"\"\n",
" tool_name: str\n",
" \"\"\"Name of tool that generated output.\"\"\""
]
},
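{
"cell_type": "markdown",
"id": "added-raw-tool-message-sketch",
"metadata": {},
"source": [
"For example (illustrative only; the real messages are constructed inside the nodes below), a SQL result would travel through the graph like this:\n",
"\n",
"```python\n",
"msg = RawToolMessage(\n",
" \"Generated dataframe my_df with columns ['a', 'b']\",  # string content the model sees\n",
" raw={\"my_df\": pd.DataFrame({\"a\": [1], \"b\": [2]})},  # full object, kept out of the prompt\n",
" tool_call_id=\"call_123\",\n",
" tool_name=\"create_df_from_sql\",\n",
")\n",
"```"
]
},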
{
"cell_type": "markdown",
"id": "ad1b681c-c918-4dfe-b671-9d6eee457a51",
"metadata": {},
"source": [
"### Define Nodes"
]
},
{
"cell_type": "markdown",
"id": "966aeec1-b930-442c-9ba3-d8ad3800d2a4",
"metadata": {},
"source": [
"First we'll define a node for calling our model. We need to make sure to bind our tools to the model so that it knows to call them. We'll also specify in our prompt the schema of the SQL tables the model has access to, so that it can write relevant SQL queries."
]
},
{
"cell_type": "markdown",
"id": "88f15581-11f6-4421-aa17-5762a84c8032",
"metadata": {},
"source": [
"We'll use our models tool-calling abilities to reliably generate our SQL queries and Python code. To do this we need to define schemas for our tools that the model can use for structuring its tool calls.\n",
"\n",
"Note that the class names, docstrings, and attribute typing and descriptions are crucial here, as they're actually passed in to the model (you can effectively think of them as part of the prompt)."
]
},
{
"cell_type": "code",
"execution_count": 33,
"id": "390f170b-ba13-41fc-8c9b-ee0efdb13b98",
"metadata": {},
"outputs": [],
"source": [
"# Tool schema for querying SQL db\n",
"class create_df_from_sql(BaseModel):\n",
" \"\"\"Execute a PostgreSQL SELECT statement and use the results to create a DataFrame with the given colum names.\"\"\"\n",
"\n",
" select_query: str = Field(..., description=\"A PostgreSQL SELECT statement.\")\n",
" # We're going to convert the results to a Pandas DataFrame that we pass\n",
" # to the code intepreter, so we also have the model generate useful column and\n",
" # variable names for this DataFrame that the model will refer to when writing\n",
" # python code.\n",
" df_columns: List[str] = Field(\n",
" ..., description=\"Ordered names to give the DataFrame columns.\"\n",
" )\n",
" df_name: str = Field(\n",
" ..., description=\"The name to give the DataFrame variable in downstream code.\"\n",
" )\n",
"\n",
"\n",
"# Tool schema for writing Python code\n",
"class python_shell(BaseModel):\n",
" \"\"\"Execute Python code that analyzes the DataFrames that have been generated. Make sure to print any important results.\"\"\"\n",
"\n",
" code: str = Field(\n",
" ...,\n",
" description=\"The code to execute. Make sure to print any important results.\",\n",
" )"
]
},
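{
"cell_type": "markdown",
"id": "added-tool-call-shape-note",
"metadata": {},
"source": [
"To see what these schemas buy us, here is roughly what a model response looks like once the tools are bound (a hypothetical example; the actual query depends on your data and model):\n",
"\n",
"```python\n",
"msg = llm.bind_tools([create_df_from_sql, python_shell]).invoke(\"show me all users\")\n",
"msg.tool_calls\n",
"# [{'name': 'create_df_from_sql',\n",
"#   'args': {'select_query': 'SELECT ...', 'df_columns': [...], 'df_name': 'users'},\n",
"#   'id': '...'}]\n",
"```"
]
},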
{
"cell_type": "code",
"execution_count": 34,
"id": "a98cf69a-e25b-4016-a565-aa16e43e417a",
"metadata": {},
"outputs": [],
"source": [
"system_prompt = f\"\"\"\\\n",
"You are an expert at PostgreSQL and Python. You have access to a PostgreSQL database \\\n",
"with the following tables\n",
"\n",
"{db.table_info}\n",
"\n",
"Given a user question related to the data in the database, \\\n",
"first get the relevant data from the table as a DataFrame using the create_df_from_sql tool. Then use the \\\n",
"python_shell to do any analysis required to answer the user question.\"\"\"\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", system_prompt),\n",
" (\"placeholder\", \"{messages}\"),\n",
" ]\n",
")\n",
"\n",
"\n",
"def call_model(state: AgentState) -> dict:\n",
" \"\"\"Call model with tools passed in.\"\"\"\n",
" messages = []\n",
"\n",
" chain = prompt | llm.bind_tools([create_df_from_sql, python_shell])\n",
" messages.append(chain.invoke({\"messages\": state[\"messages\"]}))\n",
"\n",
" return {\"messages\": messages}"
]
},
{
"cell_type": "markdown",
"id": "4e87c72e-7f9e-4377-94c9-abd9fb869866",
"metadata": {},
"source": [
"Now we can define the node for executing any SQL queries that were generated by the model. Notice that after we run the query we convert the results into Pandas DataFrames — these will be uploaded the the code interpreter tool in the next step so that it can use the retrieved data."
]
},
{
"cell_type": "code",
"execution_count": 35,
"id": "a229efba-e981-4403-a37c-ab030c929ea4",
"metadata": {},
"outputs": [],
"source": [
"def execute_sql_query(state: AgentState) -> dict:\n",
" \"\"\"Execute the latest SQL queries.\"\"\"\n",
" messages = []\n",
"\n",
" for tool_call in state[\"messages\"][-1].tool_calls:\n",
" if tool_call[\"name\"] != \"create_df_from_sql\":\n",
" continue\n",
"\n",
" # Execute SQL query\n",
" res = db.run(tool_call[\"args\"][\"select_query\"], fetch=\"cursor\").fetchall()\n",
"\n",
" # Convert result to Pandas DataFrame\n",
" df_columns = tool_call[\"args\"][\"df_columns\"]\n",
" df = pd.DataFrame(res, columns=df_columns)\n",
" df_name = tool_call[\"args\"][\"df_name\"]\n",
"\n",
" # Add tool output message\n",
" messages.append(\n",
" RawToolMessage(\n",
" f\"Generated dataframe {df_name} with columns {df_columns}\", # What's sent to model.\n",
" raw={df_name: df},\n",
" tool_call_id=tool_call[\"id\"],\n",
" tool_name=tool_call[\"name\"],\n",
" )\n",
" )\n",
"\n",
" return {\"messages\": messages}"
]
},
{
"cell_type": "markdown",
"id": "7a67eaaf-1587-4f32-ab5c-e1a04d273c3e",
"metadata": {},
"source": [
"Now we need a node for executing any model-generated Python code. The key steps here are:\n",
"- Uploading queried data to the code intepreter\n",
"- Executing model generated code\n",
"- Parsing results so that images are displayed and not passed in to future model calls\n",
"\n",
"To upload the queried data to the model we can take our DataFrames we generated by executing the SQL queries and upload them as CSVs to our code intepreter."
]
},
{
"cell_type": "code",
"execution_count": 36,
"id": "450c1dd0-4fe4-4ab7-b1d7-e012c3cf0102",
"metadata": {},
"outputs": [],
"source": [
"def _upload_dfs_to_repl(state: AgentState) -> str:\n",
" \"\"\"\n",
" Upload generated dfs to code intepreter and return code for loading them.\n",
"\n",
" Note that code intepreter sessions are short-lived so this needs to be done\n",
" every agent cycle, even if the dfs were previously uploaded.\n",
" \"\"\"\n",
" df_dicts = [\n",
" msg.raw\n",
" for msg in state[\"messages\"]\n",
" if isinstance(msg, RawToolMessage) and msg.tool_name == \"create_df_from_sql\"\n",
" ]\n",
" name_df_map = {name: df for df_dict in df_dicts for name, df in df_dict.items()}\n",
"\n",
" # Data should be uploaded as a BinaryIO.\n",
" # Files will be uploaded to the \"/mnt/data/\" directory on the container.\n",
" for name, df in name_df_map.items():\n",
" buffer = io.StringIO()\n",
" df.to_csv(buffer)\n",
" buffer.seek(0)\n",
" repl.upload_file(data=buffer, remote_file_path=name + \".csv\")\n",
"\n",
" # Code for loading the uploaded files.\n",
" df_code = \"import pandas as pd\\n\" + \"\\n\".join(\n",
" f\"{name} = pd.read_csv('/mnt/data/{name}.csv')\" for name in name_df_map\n",
" )\n",
" return df_code\n",
"\n",
"\n",
"def _repl_result_to_msg_content(repl_result: dict) -> str:\n",
" \"\"\"\n",
" Display images with including them in tool message content.\n",
" \"\"\"\n",
" content = {}\n",
" for k, v in repl_result.items():\n",
" # Any image results are returned as a dict of the form:\n",
" # {\"type\": \"image\", \"base64_data\": \"...\"}\n",
" if isinstance(repl_result[k], dict) and repl_result[k][\"type\"] == \"image\":\n",
" # Decode and display image\n",
" base64_str = repl_result[k][\"base64_data\"]\n",
" img = Image.open(io.BytesIO(base64.decodebytes(bytes(base64_str, \"utf-8\"))))\n",
" display(img)\n",
" else:\n",
" content[k] = repl_result[k]\n",
" return json.dumps(content, indent=2)\n",
"\n",
"\n",
"def execute_python(state: AgentState) -> dict:\n",
" \"\"\"\n",
" Execute the latest generated Python code.\n",
" \"\"\"\n",
" messages = []\n",
"\n",
" df_code = _upload_dfs_to_repl(state)\n",
" last_ai_msg = [msg for msg in state[\"messages\"] if isinstance(msg, AIMessage)][-1]\n",
" for tool_call in last_ai_msg.tool_calls:\n",
" if tool_call[\"name\"] != \"python_shell\":\n",
" continue\n",
"\n",
" generated_code = tool_call[\"args\"][\"code\"]\n",
" repl_result = repl.execute(df_code + \"\\n\" + generated_code)\n",
"\n",
" messages.append(\n",
" RawToolMessage(\n",
" _repl_result_to_msg_content(repl_result),\n",
" raw=repl_result,\n",
" tool_call_id=tool_call[\"id\"],\n",
" tool_name=tool_call[\"name\"],\n",
" )\n",
" )\n",
" return {\"messages\": messages}"
]
},
{
"cell_type": "markdown",
"id": "dd530250-60b6-40fb-b1f8-2ff32967ecc8",
"metadata": {},
"source": [
"### Define Edges\n",
"\n",
"Now we're ready to put all the pieces together into a graph."
]
},
{
"cell_type": "code",
"execution_count": 37,
"id": "a04e0a82-1c3e-46d3-95ea-2461c21202ef",
"metadata": {},
"outputs": [],
"source": [
"def should_continue(state: AgentState) -> str:\n",
" \"\"\"\n",
" If any Tool messages were generated in the last cycle that means we need to call the model again to interpret the latest results.\n",
" \"\"\"\n",
" return \"execute_sql_query\" if state[\"messages\"][-1].tool_calls else END"
]
},
{
"cell_type": "code",
"execution_count": 38,
"id": "b2857ba9-da80-443f-8217-ac0523f90593",
"metadata": {},
"outputs": [],
"source": [
"workflow = StateGraph(AgentState)\n",
"\n",
"workflow.add_node(\"call_model\", call_model)\n",
"workflow.add_node(\"execute_sql_query\", execute_sql_query)\n",
"workflow.add_node(\"execute_python\", execute_python)\n",
"\n",
"workflow.set_entry_point(\"call_model\")\n",
"workflow.add_edge(\"execute_sql_query\", \"execute_python\")\n",
"workflow.add_edge(\"execute_python\", \"call_model\")\n",
"workflow.add_conditional_edges(\"call_model\", should_continue)\n",
"\n",
"app = workflow.compile()"
]
},
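{
"cell_type": "markdown",
"id": "added-streaming-note",
"metadata": {},
"source": [
"We'll call `app.invoke(...)` below to run the whole graph to completion. If you'd rather watch intermediate steps, compiled LangGraph graphs also support streaming; a minimal sketch:\n",
"\n",
"```python\n",
"for step in app.stream({\"messages\": [(\"human\", \"graph the average latency by model\")]}):\n",
"    print(step)  # one update per node execution\n",
"```"
]
},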
{
"cell_type": "code",
"execution_count": 39,
"id": "74dc8c6c-b520-4f17-88ec-fa789ed911e6",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" +-----------+ \n",
" | __start__ | \n",
" +-----------+ \n",
" * \n",
" * \n",
" * \n",
" +------------+ \n",
" ...| call_model |*** \n",
" ....... +------------+ ******* \n",
" ........ .. ... ******* \n",
" ....... .. ... ****** \n",
" .... .. .. ******* \n",
"+---------+ +-------------------+ .. **** \n",
"| __end__ | | execute_sql_query | . **** \n",
"+---------+ +-------------------+* . **** \n",
" ***** . ***** \n",
" **** . **** \n",
" *** . *** \n",
" +----------------+ \n",
" | execute_python | \n",
" +----------------+ \n"
]
}
],
"source": [
"print(app.get_graph().draw_ascii())"
]
},
{
"cell_type": "markdown",
"id": "6d4e079b-0cf8-4f9d-a52b-6a8f980eee4b",
"metadata": {},
"source": [
"## Test it out\n",
"\n",
"Replace these examples with questions related to the database you've connected your agent to."
]
},
{
"cell_type": "code",
"execution_count": 40,
"id": "2c173d6d-a212-448e-b309-299e87f205b8",
"metadata": {},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAA90AAAJOCAYAAACqS2TfAACw/klEQVR4Ae3dCbxM9f/H8Y8WW4lKRIhsJe17ISollahoU9o3UpGKNkpIC5XSLy3SgkpFq1YtpBJCipJCSFJIJeX8z/vb/zudu+AOM/fOufP6Ph73zsyZM2fOeZ4zM+dzvt/v51siCItREEAAAQQQQAABBBBAAAEEEEAg5QKbpXyJLBABBBBAAAEEEEAAAQQQQAABBJwAQTcHAgIIIIAAAggggAACCCCAAAJpEiDoThMsi0UAAQQQQAABBBBAAAEEEECAoJtjAAEEEEAAAQQQQAABBBBAAIE0CRB0pwmWxSKAAAIIIIAAAggggAACCCBA0M0xgAACCCCAAAIIIIAAAggggECaBAi60wTLYhFAAAEEEEAAAQQQQAABBBAg6OYYQAABBBBAAAEEEEAAAQQQQCBNAgTdaYJlsQgggAACCCCAAAIIIIAAAggQdHMMIIAAAggUisC9995rJUqUsIYNGxbK+8XpTZo2bZoyl5kzZ1rPnj3tu+++ixPBOtc1lTbrfJPwiZo1a7rjU++XXxk2bJh7XsfwuHHj8ptlo6ZpX2mZG1POOecct94b81pegwACCCBQeAIE3YVnzTshgAACWS3w6KOPuu3/4osv7OOPP85qi3RuvILuXr16FZugO51WuZddrlw5e//9923OnDm5nzIdv9tss02e6UxAAAEEEEBgQwIE3RsS4nkEEEAAgU0WmDRpkn3++ed23HHHuWU98sgjm7zMZBcQBIH98ccfyb6M+bNIoFGjRrbTTju5ADu62QrCFYyfeuqp0cncRwABBBBAoEACBN0FYmImBBBAAIFNEfBBdr9+/ezQQw+1ESNG2O+//+4WuWbNGqtUqZKdddZZed7i119/tTJlyliXLl0Sz61YscKuvvpqq1WrlpUsWdIFSVdeeaWtWrUqMY/uqMlup06d7MEHH7TddtvNSpUqZY8//ribRzXBBx10kG233Xau9nLfffc1raMC82hZvXq1de3a1XbccUcrW7asNWnSxD777DPXpFdNe6Nl8eLFdvHFF1u1atXcemn99D5///13dLaNvq8LF6eddpp7b5nUDJtDn3766fb9998nljl06FBr27ate9ysWbNEc2hN9+Wtt96yI4880m23tumwww6zt99+2z/tbn2TZ7VK0HuUL1/eKleubOedd54tX748x7xr1661++67z/bee2+3rypUqGAHH3ywjRkzxs13/vnnO2e/v6MvPuKII2z33XePTlrn/Q8++MAtV9uuwPjGG2+0f/75x82v/Va3bl075phj8rz+t99+c+vfsWPHPM/lnrDZZpvZ2Wef7Y4TbZcvquWuXr26HXXUUX5Sjltt6yGHHOKOEdWWN2/e3D766KMc8+jBK6+84px0LOr4uPPOO/PMownangceeCBhuu2229opp5xi3377bb7zMxEBBBBAILMFCLoze/+wdggggEDsBVS7PHz4cDvggANcv2UFbitXrrRnn33WbduWW25p7du3t1GjRpkC6mjR6/78808799xz3WQFbocffrgLijp37myvvfaaXXvttaagslWrVnmC5hdffNEGDx5sN910k40dO9YaN27slqP+zgqQn3nmGXv++eftpJNOsssvv9xuvfXW6Nu79x04cKC7HT16tJ188snWpk0b08WAaFHAfeCBB7r30HtpvRRs9u3b1y688MLorBt9X+tcv3590/poW26//XZbtGiRc126dKlbrloS9OnTx92///77XeCn4M+3MHjyySft6KOPdgG3LkBo+3XhQcFq7sBbC9H21qtXz+2b6667zp5++mm76qqr3PL9P118uOKKK9x6jBw50l1Q0b7Q+qrouV9++cW91k34/39qBv/uu+9aQYJh+eqCw5lnnmnaDwpAe/fu7ZatxekCi/bfm2++aV9//XX0bUx9sXVcFeR99EIdnwsXLnTGeqzAXlbaTgXluYtMTjzxRGeq41UXb7S9TZs2tQ8//DAxu3w1n4JyXXS64447nP9jjz2WmMff0bGpC0kK8nUMKwDXBRBdsPrxxx/9bNwigAACCMRFILyaSkEAAQQQQCBtAmHQo+rjIKxxdu8RBtzB1ltvHYQBcOI9p02b5uZ56KGHEtN0Jwxkg/322y8xLQxigzDwCT799NPENN157rnn3OtfffXVxHS9Z1hDGyxbtiwxLb87YVAVhLXtwS233BJsv/32QVjD6WYLgxy3zDCoz/GyMLBy0zt06JCYHgZJbpvCWufENN0JazLdvFrW+kp4ISEIa3zXN0ue58Ia9CCsxQ222mqr4J577kk8H17McO8ZBrSJaboTtgQIwgA7OOGEE3JM1/bvtddezto/cfPNN7tl9O/f309yt5dddllQunTphFHY5NrNd/311+eYL/cDbV9YE55j8qWXXhqEfaQDHQ/rK3qt9mUYbOeYLbyY4Y4Fbx4G1kEY0AZhkJ9jvgYNGgRhrX+Oafk92HnnnYPw4oR7Su8ZBvbuflg7HYRBfTB37twgt63sqlatGuyxxx6B7vuibQpbbwRhkOwnBWHLCjdveBEqMU3rrH2i7fMlvEjiHt91111+krudP39+ENbyB9dcc01iuo5BrTcFAQQQQCCzBfJesg2/+SkIIIAAAgikSkA1f2oSrJpKlTDgdk2g1VzY10qGQYuFwbVFa/2+/PJL++STT1zNo1+Xl19+2dWWqymzmm37P9XUqrYzd1ZpNV9W09zc5Z133nG1iGo2vfnmm5tq21VD/fPPP9uSJUvc7O+99567bdeuXY6Xq5Z1iy22yDFN66Xm3GEAllgnrduxxx6bY1k5XpTkAzWTVq1+nTp13PtrHWSpZvWy2lCZMGGChRcgLAzUcqyjmlG3aNHCwgsZeZroq8Y6Wvbcc0/X8sAbqUZfZUO1yKrtnjp1qo0fP97Nr5rnJ554wq2LtmFDRbXDudfljDPOMK27+lqraB61iFCrB9/VQPtZNerqZpBMUW23mozreNDxq32r5vy5y6xZs1ytuLpGRGvBtU1qJTBx4kTXjULrI1+1qAgvWiQWo3UOL4IkHuuOjiUdy2r94Y9v3aqLQ3hxJM8xnuPFPEAAAQQQyEgBgu6M3C2sFAIIIFA8BL755hsXFKl5c3gN2jXLVtNsBa4qPqO57ivQUVPor776Sg9dAK6+r+pT7Iua1oa14i5IVqDs/xS8aPm+mbWfv0qVKv5u4laBvJpYqwwZMsQFggqIwtpaN80nW1PApaK+zNGiYDesEY9Ock1+X3rppcT6+PXy/ZVzr1eOFxfwgYLMQYMG2QUXXOCaPms7tN477LBDgRLE+WbJsvfr52/VVF1+CsqjJfd2an+oeKOffvrJXbRQQLi+ombVClrV5F3FB8YbCtb9MnPvA0337+n3k6apibm6Ljz11FN66LzUx17vn0yRkYLjAQMGmParugrkV/x753ec6QKMLgqoqbn+dN+vc3RZuadpP2lfaJv9/vG3CuJTcSxF35/7CCCAAALpF8h5qT7978c7IIAAA
ghkkYCCagUQYfNv95d709VXVn1zVdus4FoJ0xSQ3Xbbba4mtHXr1jlqqitWrOhqzaPBenSZej5aVGOYu6g/rYIY1ShGax3VdzZafMCpIEiJu3xRraMPtvw0va9qgbXe+RUFYJtSlLxM6xs2+zb1rfZFid5yB8r+udy33kZJz5ToLL+SX3Cb33x+mgL+sFm1qc91foGnn0+1wAqwe/ToYWGzaddHWcnc1Ee9IMVfMIjOq/dU8ftJ99UKQK0LFNzrVrXVSman4yuZogRzapmhPvkaJkw11PkV/97qW5+7qF+4tlstLfQZ0LHo1zk6b+5p2k+aVy1B/EWO6Pz5TYs+z30EEEAAgcwTIOjOvH3CGiGAAALFQkDBmILq2rVr28MPP5xnmxREKgBTE+Xjjz/eBScKspX4SpmgFYyo9jtaNJ8ShSnYUfbnjSkKaFRbHQ3EVHOr5s7RokzlKkoOpuzmvugCggLvaNF6hf3J3bbm15w9Ou/G3Nc6K3DLHXDJVc7R4ufxtdH+OWUpV2bxjWlu7Ze
"text/plain": [
"<PIL.PngImagePlugin.PngImageFile image mode=RGBA size=989x590>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"The graph of the average latency by model has been generated successfully. However, it seems that the output is not displayed here directly. To view the graph, you would typically run the provided Python code in an environment where graphical output is supported, such as a Jupyter notebook or a Python script executed in a local environment with access to a display server.\n"
]
}
],
"source": [
"output = app.invoke({\"messages\": [(\"human\", \"graph the average latency by model\")]})\n",
"print(output[\"messages\"][-1].content)"
]
},
{
"cell_type": "markdown",
"id": "a67fbc65-2161-4518-9eea-f0cdd99b5f59",
"metadata": {},
"source": [
"**LangSmith Trace**: https://smith.langchain.com/public/9c8afcce-0ed1-4fb1-b719-767e6432bd8e/r"
]
},
{
"cell_type": "code",
"execution_count": 41,
"id": "1d512f95-7490-483e-a748-abf708fbd20c",
"metadata": {},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAjsAAAHFCAYAAAAUpjivAAB6W0lEQVR4Ae2dB5wURdrGX1iWzJKjkiSIJEVQgkgOooioJyqegnCKh6CICIL6CQYwRxTPiHIi3omYQBQORRADAkoQERUJkhGWtCypv/cprband3Z2Ztkw4Sl+y3RXV1dX/aun+5m33qoq4GgQBhIgARIgARIgARKIUwIF47RerBYJkAAJkAAJkAAJGAIUO7wRSIAESIAESIAE4poAxU5cNy8rRwIkQAIkQAIkQLHDe4AESIAESIAESCCuCVDsxHXzsnIkQAIkQAIkQAIUO7wHSIAESIAESIAE4poAxU5cNy8rRwIkQAIkQAIkQLHDe4AESIAESIAESCCuCVDsxHXzsnIkQAIkQAIkQAIUOzF2D0yePFkKFCjg/hUqVEiqVq0qV1xxhaxduzZbtfn0009NfviMNHz//fcyduxY+fXXXzOc2r9/f6lVq1aG+LyOQDlKliwZ1mXBFvXJqYC8kOfOnTtzKkuZNWtWjpYxxwqWgxl573Fsly5dWjp06CAzZ87MwavkbVZTp06VJ554IsuL+r/jfhZ2P5zvls3rm2++yfK6iZYA9xP+sgrg3LNnz6yShXV80aJF5ru7Z8+esNIzUc4RoNjJOZZ5mtMrr7wiX3zxhcydO1eGDBki7733nrRt21Z2796dp+WA2Bk3blxQsXPXXXfJjBkz8rQ8J3oxMP3HP/5xotnk6vkQO2Ae7+Fvf/ubucc///xzeeaZZ2Tr1q1y4YUXxqzgCVfsXHDBBabeuBftH9ra8rBxsfbdivf7NZz6Qezgu0uxEw6tnE1TKGezY255RaBx48bSokULczn8Ojl27Jjcfffd8s4778i1116bV8UIeZ06deqEPB6NB1u1ahWNxUrIMlWuXFlse7Rp00Zat24tdevWNdYRCIJg4ciRI8aSBotnrIaKFSsK/vzBy8N/jPskQAKhCdCyE5pPzBy1wmfbtm0BZYb5ulevXlKuXDkpWrSoNGvWTP7zn/8EpAm2g/PQNQYTbrFixcznlVdeKevXr3eTw0R+2WWXmf2OHTu6XWuIRwjWjXXo0CEZPXq01K5dWwoXLiwnnXSS3HjjjRl+6VjT8ezZs+XMM880ZWjQoIG8/PLLJm/738GDB2XEiBEmP9QP9QSLN954wyZxP3/66Sc5//zzTZdW9erV5dZbb5X09HT3ODbQRYCuJxtsN8CcOXOMiET+JUqUMBaGX375xSbL8nPjxo1yySWXSEpKiumS+fvf/y47duzIcN6bb75pXuq4BrreunfvLsuWLXPTgSmsHAi2OwOf6EZEWzRq1MhNiw1YQnD8v//9rxu/dOlSE/f++++7cbCaDBo0SE4++WTTLmgf/AI9evSomwYbhw8flvvuu0/QFkWKFDEvZYhrf13Cbb+AzLPYgXiGCLD3oO1+nTJlimlL3EsoE9oZAffK6aefbu57tNvFF18sq1evDriK7eL84YcfDGtwR7fwAw88YNJ9+eWXxmKK+Pr168urr74acH649wd+kKALDmX3tltAZhHuLFy4UDp37iylSpWS4sWLCwRhON18W7ZskebNm0u9evXcru+9e/e63yP7vRw2bJgcOHAgoFQoOyzJYH7aaaeZ64LxBx98EJAO98P1118v+J7Z++Scc84xluiAhL4dtB3uJ5QNdUKb4h5esWJFQErb9vie33HHHVKtWjXz3erSpYusWbMmIC3Wun7ooYekZs2a5l7A8+TDDz8MSHOiO3g+XHTRReb7g+cQRDm+T97uazxXbrvtNnMpfL/sfeB1H8jq+4+T7T0bzvMMz7d77rnHtBXKVb58ecGzGhYmBNw/+C771wPHPuqQ2Y8Kc3Ks/aeVYoghAtp9hVXqncWLFweUeuLEiSZ++vTpbvy8efMcfXA55557rqNfIkeFg6NfFJMO+djwySefmDh82qAvR+f//u//HDWVO/Pnz3emTZvmtG/f3tGXjaMPMpNs+/btzvjx4825+gJ21Lxu/hCP0K9fP0cfMGYb/x0/ftzRl7ejv7od7eJyPv74Y+eRRx5x9EXiqAhzVAi5aXGevnidhg0bOq+99prz0UcfOfoyN9dCeWzQB4qjD0Xnsccec1B+feg6+qJynn76aZvElAMc9OFsrqddf6Zu+rBx9IXupsMG2KqFzI2zvPWh7QwYMMDRh6Tz/PPPO5UqVXIQp92GbtpgG8gLeaI++qAz9UBZbZ1VPLin3X///Q7KhOugHm+//baj1gyTdtWqVSadPuAc7c4weVre+AS75557zsRv3rzZpFUrh6MvQkfFqnPddde513nwwQdNG+gLzsTpy8/UBWX817/+5YDPvffe6+hLytwv9kS1HjrnnXeeKQ+46QPeefHFFx19IZl2UuFpk5r6htN+7gm+DTBTERwQ+/vvvzsFCxZ09KVu4tHeSIfrg4l25Rpuu3btcu9LFeiOCgBzD51yyimO+v44P/74o5sv7lF7bzz55JOmTvqyNfmqKHdU4DgvvfSSaTf12zDx+kPAPT/c+wPtpy97p0qVKu73BO0WbvDz0Bekk5yc7KhoMd9tteg63bp1M/cPvqs22PLZ54WKBtPWuK/s91gFjXPGGWc4FSpUMN8jtD9YgFWnTp3M99bmh3KokHXOPvtsR380Odql6qiQM/fTzz//bJOZ7zmeFfiuoKwoH54n3rK5iT0b+G7rjxDnrbfeMs8dPH969+5t7mEVpG5K2/Yoy1VXXWXaWIWPU6NGDUeFkqMi3U1rv4MDBw50v7+4Z9AWeKZlFfC90Jd+yGSTJk1yJkyYYO5B1EFFsaMi0Dn11FMd+x3XHzzO0KFDzT2E77b9/qamppq8w/n+I6H3nsXzM7PnGb7/KmxM2+gPQtNW+I6MGTPGASuEd99915QH32VvwHcGbY3PeAlQdAwxRMA+vPQXp4Obed++fUbE4Ivbrl07E2ero4rdiAik8wY8tPXXq4OXF4J9cOAzs4CHx/79+82LDg9CGyCK8KUIdi6+lHhQ2ACxhbT6K8tGmU8IMcTjwWgDztNfIo7+ErZRTlpamqO/0B0IHBu0O888DO1+sE+UA/nj4ewNauUxDyNvHNLh4WiD5a1WARtlPtWPxOSpVo6AeP+OfdDecsstAYdef/11c/6///1vE79hwwbzUMLD0BvQvmjbPn36uNEQASinP0AIIR7iEEF/+Zv9kSNHOvpL0k3etWtXVzAgEjzVihTAGvF4kCI/K7TwgMS+V1AjHV6kiH/22Wexa0K47WfT+z+R3+DBg839jJeFWmScHj16mOtAWCPgnkM63PfeAAEKgYf29QYwhoDr27evG23vDW+d8H3Bixp5qxXMTQsRlZSU5AwfPtyNi+T+wAvT+31wMwljA2Xxij/t3jOCG/eHDfiO4vsAkYkfFgi2fGgjvNDUsmiEIb5LNuAlDRFpBZGNh+DAdSFobMC+dqc5VigjXq2C5nzkYwPuJ7UM2d1sf6JOaH8IGO93yLa9v43xHUcZrZDEvYDnSGbf35wSO94Kgj3uITy7U
BYIChsefvhhE7du3TobZT4j+f7bezar5xmeA7j+Cy+8EHAt7w7eAfgRoFYpb7T5rqkl1b2PAg7G6A67sfRuiMUAXwb9ZWdM2PprW8qWLSv6pRLrqwATJ0zz+qvHVE8fGqZLAp/oyoEp22/u9XJQYSOjRo0ypkzkiT90q8Cs7e8K8J4XalstTeYwzLDegO4XdBP873//80aL/toU/aXmxsEMi64E242BA/oL05ikb7/9doE5WB/ibnrvBkzGMId7Q9OmTQPy8h7zb1uONh5dBvriEn3o2qiQn/7zVbwYpvZ8tVyZ9rnmmmvcdkJboc76QDZ1C3kBPYhuHnQf6S89kxSm9SZNmgi6zPThKvrL23TbofsD5n4b0AUB0za6Arz3iYoLk0R/qZpPpCtTpozh6E2HdlJBlqGM4bSfLUOwTxVP5h5Htwq6TGB6h0leRVBA8ksvvTRgX1905j7w32foUlFLRYb7DPcGvhM24F6HCR/dWej2tQFdYWrRC3rP+Ns30vvDXiOcT3wHv/rqK+OwjO+
"text/plain": [
"<PIL.PngImagePlugin.PngImageFile image mode=RGBA size=571x453>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"The correlation coefficient between the number of prompt tokens and latency is approximately 0.305, indicating a positive but relatively weak relationship. This suggests that as the number of input tokens increases, there tends to be an increase in latency, but the relationship is not strong and other factors may also influence latency.\n",
"\n",
"Here is the scatter plot showing the relationship visually:\n",
"\n",
"![Scatter Plot of Prompt Tokens and Latency](sandbox:/2)\n"
]
}
],
"source": [
"output = app.invoke(\n",
" {\n",
" \"messages\": [\n",
" (\"human\", \"what's the relationship between latency and input tokens?\")\n",
" ]\n",
" }\n",
")\n",
"print(output[\"messages\"][-1].content)"
]
},
{
"cell_type": "code",
"execution_count": 43,
"id": "10071b83-19c6-468d-b5fc-600b42cd57ac",
"metadata": {},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAp4AAAHFCAYAAABb1/k6AAB6DklEQVR4Ae2dB5xVxfm/X6SDLIIgRQGxi2DFIIgIFtRYo4lYohj9WaKoRLFg4j9ojKiJJcaoKcYWWxJ7xIJREcSKhaISVFQQEEF6L+c/34lzc+7h7u7d5d7de+8+L5/lnjNnZs7MM3Nm3vNOOfUiJ4ZAAAIQgAAEIAABCEAgzwQ2yXP8RA8BCEAAAhCAAAQgAAFPAMWTigABCEAAAhCAAAQgUCMEUDxrBDM3gQAEIAABCEAAAhBA8aQOQAACEIAABCAAAQjUCAEUzxrBzE0gAAEIQAACEIAABFA8qQMQgAAEIAABCEAAAjVCAMWzRjBzEwhAAAIQgAAEIAABFE/qAAQgAAEIQAACEIBAjRBA8awRzNwEAhCAAAQgAAEIQKBKiuc999xj9erVS/01aNDAOnToYCeccIJNmzatWjRfeeUVH59+qyoffvihjRgxwj7//PMNgp522mm29dZbb+Be0w5Kx6abbprVbcVW+cmVKC7FOW/evFxFaaNGjcppGnOWsBxGFK/jOm7ZsqX179/fnnnmmRzepWajevDBB+2WW26p9KbJZzzJIpxn82yFuN55551K71vXPKg+6a8yEecjjjiiMm9ZXR8/frx/dhcuXJiV/2LwtGbNGttpp53suuuuSyU31DvV1Uz9ij7Wt9122/m2MZsySEWcxYHuqXa3qqI+TGGV9lKVupDHZNmpfumvOqJ6pDpRqPKf//zHGjVqZO+++26Vk7hJlUO4AHfffbe9/vrr9uKLL9qQIUPsqaeesr59+9qCBQuqE121w0jxvOqqqzIqnldeeaU9/vjj1Y67NgKK6f/93//Vxq2zvqcUTzEvdfnhD3/o6/hrr71mf/jDH2zOnDl25JFHFq3yma3iefjhh/t8qy6GP5V14BHciu3ZKvX6mk3+pHjq2S0lxfP222/3/c7555+/AYIWLVrYXXfdtYH7mDFj7NNPPzVdRyCQTwKqn/qrjkgXUHtbqLLDDjvYySefbD/72c+qnMQGVQ7hAnTv3t169uzpg0qbX7dunf3yl7+0J554wn7yk59UJ8qch9l2221zHme+I9xnn33yfQviz5JAu3btLJRHnz59rHfv3t5KIquhlLNMIuuL3lA1ElCs0rZtW9NfUuI8ktc4h0BtEFi7dq395je/sdNPP92aN2++QRIGDRpkDzzwgH9xLCsrS12XMqrnefHixSk3DiCQDwLdunWrdrRbbbWV6a8mZcWKFda0adOsbynDo3RBvdSqn8xWqmXxTEYelNCvv/467ZKG2I466ihr3bq1NWnSxPbYYw/7+9//nuYn04nCafhew0yCoN8TTzzRvvjii5R3DUn86Ec/8ucDBgzwHX58qCLTUPvKlStt+PDh1rVrV28i3nLLLe28887bwAKg+2l467nnnrM999zTp0HDOX/9619T99fB8uXLbdiwYT4+5U/5FIuHHnoozZ9OPvnkE/v+97/vh907depkF198sa1atSrNn9IfH6YJQ0ajR4/2Cr3iVwMry9tnn32WFraikxkzZtixxx5ranw1bPzjH//Yvvnmmw2CPPLII75B1j00PeCQQw6x9957L+VPTGX9kyit4U9DKCqLXXbZJeVXB0qn/PzjH/9IucssL7enn3465SZr4tlnn+0fMpnuVT6yzKhjicvq1avtmmuu8UNrjRs39gqSXnSSecm2/OJxV3asFxkpZKEOhiki999/vy9L1SWlSeUsUV3ZbbfdfL1Xuf3gBz+wjz76KO02YRrGxx9/7FmLu6auhGHDN954w48kyF1vl/fee29a+GzrR3/3cqhpAkp7KDP9boyMGzfODjzwQG81atasmW90spmKMHv2bNtrr71s++23T03PkQIQniOVv1gOHTrUli1blpZEpVkNnZjvvPPOpvuK8b/+9a80f6oPZ511luk5C/Vk33339SM0aR4TJyo71SelTXErHarDkyZNSvMZyl7P+c9//nPr2LGjf7YOOuggmzp1appfDevecMMN1qVLF18X1J48++yzaX429kTtw9FHH+2fH7VDGkbW8xSfYqN25ZJLLvG30vMV6kF8KLqy51+BQ53Npj1T+3b11Vf7slK6Nt98c1NbrU5KovqjdlWM4hKGwst7wQt+NdL21Vdf2SmnnBKc0n7VZ0ji7fGiRYvs0Ucf9cpqmufvTr799ls799xzfdmrLm6zzTa+jJNttersmWee6fOktvLQQw81DT1mEk1DO+mkk2yLLbbw9VF1N7SjmfxX1+3JJ5+0XXfd1d9D6f7d737n+5Pks55tP7h+/Xpfd1VGeo6U/lNPPdVmzpyZlkSV17XXXpuq4+oDVSfV7uivMskVH5WR+tX27dv757dfv342YcIErz+o3sYlmz4nTA3Qy83111/v45E+ojyprGVouPzyy/3zr35VbfzcuXPjt/F+4wxCnL/97W/tpptu8n2d6o9ehNTex0XPbLLs4tczHYcw6rcr6/NDP/nYY4953UzPaBjNnDx5sm9TWrVq5dut3XfffYP+R/dXW676fOedd2ZKTvlurtJkLW6IXS1E9Pbbb6eFue2227y7e6BT7i+99FLkHtxov/32i1yDFjklLnKF7/0pniAvv/yyd9NvEKeoRP/v//2/yA3nRW5YJHr44Yej/fffP3Idf+Q6Fe/NFXDkKrsP6x7iyJmk/Z/cJYMHD45cY++P9Z97iCKnSEXOGhW5YfjohRdeiFzhR65Tj5xCHLmHMeVX4dybRuTeVqL77rsvev755yOnWPl7KT1BXOMeuQ4qchUoUvpdBxg5pSH6/e9/H7z4dIiDKxx/Pzc9wefNVajIFXLKnw7E1lmOU26Bt+tAI/dWH7kOK/rTn/4UuQYgkpub2pDym+lAcSlO5cd1Oj4fSmvIs1PkUsF+/etfR0qT7qN8uMoYuYfB+50yZYr35zqbyA25+jgDb/2Knat43n3WrFner3soIzeUFbkHNXINdOo+7gH2ZeAabu/mFBGfF6Xxj3/8YyQ+v/rVryLX0Pn6EgI6q3rkGnefHnFzDVv0l7/8JXLKgS8n9xIQvPr8ZlN+qQCJAzFzLyRprq5DijbZZJPIvdV5d5W3/On+YuI6Qc9t/vz5qXrpOr7IKWO+DrmOIHKNU+QarFS8qqOhbrhOwufJKT4+XveCFDllM3LWGV9u7kXIu7uXslT4bOuHys8pXpFrkFPPicotW0nycMpK1LBhw8g1Ov7ZdiMd0cCBA3390bMaJKQvtBdOgfNlrXoVnmOnXEauUYvatGnjnyOVv1iI1QEHHOCf2xCf0uEay+h73/te5F5gIzftI3KNuq9Pbug0ePPPudoKPStKq9Kn9iSetpTn2IGebddxRf/85z99u6P255hjjvF12L0cpHyGslda3FCTL2On3ESdO3eOnNIauRemlN/wDJ5xxhmp51d1RmWhNq0y0XPhFLAKvd1xxx3RyJEjfR1UHtwLSuQU8mjHHXeMwjPuXj4jNxzt65Ce7fD8OkXMx53N8y+P8Tqr9
rO89kzPv1Myfdm4lwpfVnpGrrjiikisJE5R8unRsxwXPTMqa/1WJGqr1BYmJV7vnFLq60vwI1Zq/9T+uBfltDJw1p7IKW7+uvKmPkJ9hfoMZzQIUfg6qbypjRI3+VM56xlXunUcRM+e6nKPHj18OyC/qmNqS5ySELxF06dP92GV9rgovmzqifoGxannQfVWfWivXr3886I4glSlH3Qvbz5N7mXP999q4/Vcqe8Jz6/iVVule8i/+vk///nP/llwL9Fpac+Ux2z5hPRX9Kv2VgycMujLxI1O+bSKv+ptkGz7nJBePYPuBdS373/7298iN/rj22bVrdAvi41TIL2/cB/9quz0FyTEqbZD/ZnaJv2pfjglL3LTYIJXX4/iZZe6UMGB6p7CKM2V9fnyozJSvXWGkkjt2ltvvRWprVPf7Ywtvs7qORRbxav+Oyk//elPffutupWt/K9GZhEiPNBOM4/UsCxZssRXNDWi7u3Cu4Vo3FuSV+jkLy7qQJVZKRISZVYZ0m95ooZ86dKlvkFQpxR
"text/plain": [
"<PIL.PngImagePlugin.PngImageFile image mode=RGBA size=670x453>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# Continue the conversation\n",
"output = app.invoke(\n",
" {\"messages\": output[\"messages\"] + [(\"human\", \"now control for model\")]}\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 44,
"id": "81fb6102-c427-41c1-97cf-54e5944d1c79",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"After controlling for each model, here are the individual correlations between prompt tokens and latency:\n",
"\n",
"- `anthropic_claude_3_sonnet`: Correlation = 0.7659\n",
"- `openai_gpt_3_5_turbo`: Correlation = 0.2833\n",
"- `fireworks_mixtral`: Correlation = 0.1673\n",
"- `cohere_command`: Correlation = 0.1434\n",
"- `google_gemini_pro`: Correlation = 0.4928\n",
"\n",
"These correlations indicate that the `anthropic_claude_3_sonnet` model has the strongest positive correlation between the number of prompt tokens and latency, while the `cohere_command` model has the weakest positive correlation.\n",
"\n",
"Scatter plots were generated for each model individually to illustrate the relationship between prompt tokens and latency. Below are the plots for each model:\n",
"\n",
"1. Model: anthropic_claude_3_sonnet\n",
"![Scatter Plot for anthropic_claude_3_sonnet](sandbox:/2)\n",
"\n",
"2. Model: openai_gpt_3_5_turbo\n",
"![Scatter Plot for openai_gpt_3_5_turbo](sandbox:/2)\n",
"\n",
"3. Model: fireworks_mixtral\n",
"![Scatter Plot for fireworks_mixtral](sandbox:/2)\n",
"\n",
"4. Model: cohere_command\n",
"![Scatter Plot for cohere_command](sandbox:/2)\n",
"\n",
"5. Model: google_gemini_pro\n",
"![Scatter Plot for google_gemini_pro](sandbox:/2)\n",
"\n",
"The plots and correlations together provide an understanding of how latency changes with the number of prompt tokens for each model.\n"
]
}
],
"source": [
"print(output[\"messages\"][-1].content)"
]
},
{
"cell_type": "code",
"execution_count": 46,
"id": "09167fa6-132a-4696-a4ee-eda80a41d3dd",
"metadata": {},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAr8AAAH+CAYAAACV9Wa6AADWhklEQVR4AeydB2AU1fbGT3olAUKvohRRARGUqmJ59md99i7P3rD3Z332/uzlb3vWZ++KDUWKiorSEZEWeknvmf/5brjD7LKb7Ca7ySb5joZpd+7c+c3s7jdnzj03zlETGgmQAAmQAAmQAAmQAAm0AgLxreAceYokQAIkQAIkQAIkQAIkYAhQ/PJGIAESIAESIAESIAESaDUEKH5bzaXmiZIACZAACZAACZAACVD88h4gARIgARIgARIgARJoNQQoflvNpeaJkgAJkAAJkAAJkAAJUPzyHiABEiABEiABEiABEmg1BCh+W82l5omSAAmQAAmQAAmQAAlQ/PIeIAESIAESIAESIAESaDUEKH5bzaXmiZIACZAACZAACZAACdRb/D7//PMSFxfn/iUmJkrXrl3luOOOk4ULF9aL7DfffGPqwzRcmzNnjtx0003y119/bbXraaedJttss81W6xt7BdqRmZkZ0mHBFucTKUNdqHPdunWRqlI+/vjjiLYxYg2LcEXr16+Xa665RnbYYQdJT0+XrKwsGTlypDz66KNSUVFR76M1Nr/HHntM8Lmty+y94v18B5ofN25cXVWZ+yPS912dB20mBfCdhO+Eugz8LrjggrqKhbS9se+5kBrVgEKRZBOsGfhNsfc/PhuB7IwzznDLBNpe33X4jIXyOQtUf6j3V6B9uY4EWjqB+Iae4HPPPSdTp06VL774wnxBv//++zJ27FjZuHFjQ6sOa3+I35tvvjmg+L3hhhvknXfeCau+pi4Mpv/85z+buhm1Hh8/pGDekm3evHkydOhQefLJJ+XEE0+Ujz76SF577TXZZZdd5OKLL5a//e1vUlxcXC8Ejc0vVPGL+w73n/17++23zfldeOGF7jpsQ3205kWgse+55kWn9ta2adPGPDxWV1f7FCwsLJT//e9/5qHYZwMXSIAEYpZAYkNbttNOO8nw4cNNNXhCraqqkhtvvFHeffddOf300xtafUT232677SJST2NWAs8irWkJ4F4+6qijJD8/X3744Qfp37+/26CDDjpI9txzT/Om49JLL5UnnnjC3dbcZ3r06CH4s2bfpvTq1ct4vO16TkmgNRE49thj5ZlnnpEvv/zSPPTac3/99dfN797hhx8u//3vf+1qTkmABGKYQIM9v/7nZoXw6tWrfTb99NNPcuihh0r79u0lNTXVeNPeeOMNnzKBFrAfQinwCictLc1Mjz/+eFmyZIlbHK9yjz76aLO81157ua+f7CveQGEPpaWl5lV2nz59JDk5Wbp37y7nn3++bNq0ya0XMzjuIYccIp9++qnx9qEN22+/vfzf//2fTzl4/y6//HJBfTg/nCdYvPrqqz7lsPDHH38IxBNCIHr27CmXXXaZlJWV+ZTDazbvKzacC9ZNnDjRPFSg/oyMDPn73/8uf/75p8++tS0sW7ZMjjzySOOlyM7OlpNOOknWrl271S74Qh81apQ5Btq5//77yy+//OKWA1O89ofZV4KYQijhWuy4445uWcygndgOD4m1n3/+2az74IMP7CpZtWqVnH322UZ84bqAJ7zLlZWVbhnMlJeXy2233WauRUpKinTs2NFw8T+XUK+fT+WbF/C2AG8Urr76ah/ha8vix3C//faTZ5991rQb64OF7oALzt97Twbjh3pQFq+64XGG6MY5IuwCXmev4R5BWX+z94sVruAwe/ZsmTRpkimPfbCuIYa3PLhHEAoCrxi84PAI12Xwpm+77bYyYsQIWbNmjSkeynW3DO+99165//77zb2BexNtmDZtms9h8ZnA90a3bt0Mu86dO8s+++wjv/76q085/4VQvm+wj+X79ddfy7nnnisdOnSQnJwc89nKzc31qRahMVdeeaV06dLFsMKbMTxMRdLwecW9iNAzfEcNHDjQ3LdFRUXuYWr7zKKQ4zjGm7/zzjubOtq1ayf/+Mc/tvp+GadODjg9fvzxR9l9993NOeF63nnnneLvFcX3Kb7fsB33cKdOncx3H+4BHK9fv37mu8Vt5OYZeFPx/YTv5FCsts8J7huE5d1xxx1bVfXtt9+az4P3e2mrQptXDBgwQEaPHr3Vdz9+C/Cdivb6G3jcfffd7vcUzv+UU06R5cuX+xQFC5Tr3bu3+f3Am6VPPvnEp4xdwMO4/a2xv10TJkwQ77W2ZTklARIIQkA/dPUyDXdwtEpHvwB99n/kkUfM+rfeestd/9VXXzn6IXX0i9LRL2lHhaSjX8SmHOqxpj8kZh2m1vRLyfnXv/7lqBBx9Ifb0R9/Rz1ujoodR4WOKaY/oM7tt99u9lVB4egPsPnDetipp57q6JeKmcc/+oXkqJhz9AvR0ZAI5/PPP3f0B9VRMenoK25HhbFbFvupF8xR4eG8+OKLzmeffeaouDPHQnusqWBzVAQ4+qPsoP0ffvihoz8Gzn/+8x9bxLQDHPSHyRxPQ0XMuakQcVTgueUwA7bqQXfXWd4qlh2NL3P0i9F56qmnHP0ydbBOw0zcsoFmUBfqxPlcccUV5jzQVnvOKibd3f797387aBOOg/PQ196OCgxTVgWUKacC3tEfRlOn5Y0p2KkX1KxXEWDK6o+/o+LI0R9l58wzz3SPc9ddd5lroF/mZt3KlSvNuaCN+mPmgM+tt97q6I+muV/sjuqRdQ444ADTHnDTBwJHPTKOPsCY66QPIraoOd9Qrp+7g2fmrLPOMucxd+5cz1rfWX31b8roQ47ZgGsPzph6bfHixWa9vd9r44f9UAeuK+471K1C05wz1uMzYc1eV7tsp/Z+wXFh+qDhqAAx97e9XlgXitm233PPPW7xl19+2bRRBZejb3nM53rYsGHmc/7dd9+55Wz77GdVHw4cFVXOYYcd5uiPtSkX6nW37VDRbljguPgbNGiQqVOFlntcFSpO3759nZdeesl8b+D7SEXYVtfF3WHzTCjfNyhq+YKphoOYzxPuQZzbXnvt5VMtvn/wecLnDt81+NzhXtXYcfOd4FM4wAKuuYrAAFu2rMLn5IEHHnA0LMcBY3wG9cHRpy113XP4bCYlJRlO+I5+5ZVXHH3Qd/TBwdGHE/dg+P5Voe+ocDXHwefvvPPOM/fDCy+84JbD51ofgs3n9JZbbjGMcB00XMjBbwLsoYceMmwWLFjg7ocZfI/jvO33jc9Gz0Kon5MjjjjC0TcXjj5Ee/Z2zHe5PiA5+I4KZva+w/2vD7qOOjecDRs2mOIq4k07cT64RmiP1+x3iD7Imt89XBf8duGzbT8TKG8/J+PHj3e/23GP6AOT+b2zdeIzow8njj5smfsI35FgqMLb2Xvvvc1vmy2L71HcezQSIIGtCfh+UrfeHnSN/fJXj4v54igoKDAfbnxY99hjD58vE3yBQlT6f8GoR9VRT4UDMQODYMCXB6bBDF9e6hUwX6j40FvDj1awffEFgC8Ca/hiR1l90rarzBTCHOshKq1hP3zZqafZrnJKSkoc9bw6ELzW1BPi6GsvuxhwinagfvV4+2xXL
7CDH2uvoRy+EK1Z3vgS99r3339v6lQvqHf1VvP2y/WSSy7x2WZFjL6uM+uXLl1qBCl+0L2G64tre8wxx7irA33ZYyN+ZNF+PCzAJk+ebJbV+2V+kM1K/Uc9hY56Uuyi4amePB/W2IgHE9RnfwghBrHsfcBCOTyIYT0EqbVQr58t751CYKM+78OQdzvm8RCCMhDysGD3sP0BxXW0FowftqNOPCx4RQfufXyWIOqs2etql+3U3i84rjUIkT1VuIRrtu1W/OLzCsEA0Wk/u6gT9wgexrzX1LYPP/QQonj4u+iii3z2w+colOtu24HjekWMelENL/sAop06zfKDDz4Y7qluVT7Y943lC9HnNXyn4NpB0MPw4ITlYJ+7UMQJ9q9L/HrbgId7fNfi4Rz7zpw5090c7J7DAxHK3nfffW5ZzOibInMf4rNrDfcQyk6fPt2uMlM8qMGpYA2CF+UgjoMZBDIejCGIvYa6/B8ivNvtPOoP5XNiP5dwolhbsWKF+a7zdzzY7XZq7zvc/7jHca/CyQP
"text/plain": [
"<PIL.PngImagePlugin.PngImageFile image mode=RGBA size=703x510>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"output = app.invoke(\n",
" {\n",
" \"messages\": output[\"messages\"]\n",
" + [(\"human\", \"what about latency vs output tokens\")]\n",
" }\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 47,
"id": "f0c48828-07ae-43df-b27f-14fdfbd835f6",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The correlation between the number of output tokens (completion_tokens) and latency varies by model, as shown below:\n",
"\n",
"- `anthropic_claude_3_sonnet`: Correlation = 0.910274\n",
"- `cohere_command`: Correlation = 0.910292\n",
"- `fireworks_mixtral`: Correlation = 0.681286\n",
"- `google_gemini_pro`: Correlation = 0.151549\n",
"- `openai_gpt_3_5_turbo`: Correlation = 0.449127\n",
"\n",
"The `anthropic_claude_3_sonnet` and `cohere_command` models show a very strong positive correlation, indicating that an increase in the number of output tokens is associated with a substantial increase in latency for these models. The `fireworks_mixtral` model also shows a strong positive correlation, but less strong than the first two. The `google_gemini_pro` model shows a weak positive correlation, and the `openai_gpt_3_5_turbo` model shows a moderate positive correlation.\n",
"\n",
"Below is the scatter plot with a regression line showing the relationship between output tokens and latency for each model:\n",
"\n",
"![Scatter Plot with Regression Line for Each Model](sandbox:/2)\n"
]
}
],
"source": [
"print(output[\"messages\"][-1].content)"
]
},
{
"cell_type": "code",
"execution_count": 48,
"id": "4114c16d-c727-49c2-beb1-27c5982b0948",
"metadata": {},
"outputs": [],
"source": [
"output = app.invoke(\n",
" {\n",
" \"messages\": [\n",
" (\n",
" \"human\",\n",
" \"what's the better explanatory variable for latency: input or output tokens?\",\n",
" )\n",
" ]\n",
" }\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 49,
"id": "7f983c4a-60b6-4dd6-ab22-2b59971e2fcd",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The correlation between input tokens and latency is 0.305, while the correlation between output tokens and latency is 0.487. Therefore, the better explanatory variable for latency is output tokens.\n"
]
}
],
"source": [
"print(output[\"messages\"][-1].content)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv-2",
"language": "python",
"name": "poetry-venv-2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}