Commit 10dab053b4

This pull request adds an enum class for the various types of agents used in the project, located in the `agent_types.py` file. Currently, the project is using hardcoded strings for the initialization of these agents, which can lead to errors and make the code harder to maintain. With the introduction of the new enums, the code will be more readable and less error-prone. The new enum members include:

- ZERO_SHOT_REACT_DESCRIPTION
- REACT_DOCSTORE
- SELF_ASK_WITH_SEARCH
- CONVERSATIONAL_REACT_DESCRIPTION
- CHAT_ZERO_SHOT_REACT_DESCRIPTION
- CHAT_CONVERSATIONAL_REACT_DESCRIPTION

In this PR, I have also replaced the hardcoded strings with the appropriate enum members throughout the codebase, ensuring a smooth transition to the new approach.
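For context, a minimal sketch of what such an enum might look like is below. The member names come from the PR description; the string values and the `str` base class are assumptions (chosen so existing code that compares against raw strings keeps working), not copied from the actual `agent_types.py`:

```python
from enum import Enum


class AgentType(str, Enum):
    """Agent types accepted by initialize_agent (string values are assumed)."""

    ZERO_SHOT_REACT_DESCRIPTION = "zero-shot-react-description"
    REACT_DOCSTORE = "react-docstore"
    SELF_ASK_WITH_SEARCH = "self-ask-with-search"
    CONVERSATIONAL_REACT_DESCRIPTION = "conversational-react-description"
    CHAT_ZERO_SHOT_REACT_DESCRIPTION = "chat-zero-shot-react-description"
    CHAT_CONVERSATIONAL_REACT_DESCRIPTION = "chat-conversational-react-description"
```

With this in place, callers pass `agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION` to `initialize_agent` instead of a raw string, as the notebook below does; because the enum also subclasses `str`, any remaining string comparisons continue to work.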
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "e5715368",
   "metadata": {},
   "source": [
    "# How to track token usage\n",
    "\n",
    "This notebook goes over how to track your token usage for specific calls. It is currently only implemented for the OpenAI API.\n",
    "\n",
    "Let's first look at an extremely simple example of tracking token usage for a single LLM call."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "9455db35",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.llms import OpenAI\n",
    "from langchain.callbacks import get_openai_callback"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "d1c55cc9",
   "metadata": {},
   "outputs": [],
   "source": [
    "llm = OpenAI(model_name=\"text-davinci-002\", n=2, best_of=2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "31667d54",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Total Tokens: 39\n",
      "Prompt Tokens: 4\n",
      "Completion Tokens: 35\n",
      "Successful Requests: 1\n",
      "Total Cost (USD): $0.0007800000000000001\n"
     ]
    }
   ],
   "source": [
    "with get_openai_callback() as cb:\n",
    "    result = llm(\"Tell me a joke\")\n",
    "    print(f\"Total Tokens: {cb.total_tokens}\")\n",
    "    print(f\"Prompt Tokens: {cb.prompt_tokens}\")\n",
    "    print(f\"Completion Tokens: {cb.completion_tokens}\")\n",
    "    print(f\"Successful Requests: {cb.successful_requests}\")\n",
    "    print(f\"Total Cost (USD): ${cb.total_cost}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c0ab6d27",
   "metadata": {},
   "source": [
    "Anything inside the context manager will get tracked. Here's an example of using it to track multiple calls in sequence."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "e09420f4",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "91\n"
     ]
    }
   ],
   "source": [
    "with get_openai_callback() as cb:\n",
    "    result = llm(\"Tell me a joke\")\n",
    "    result2 = llm(\"Tell me a joke\")\n",
    "    print(cb.total_tokens)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d8186e7b",
   "metadata": {},
   "source": [
    "If a chain or agent with multiple steps in it is used, it will track all those steps."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "5d1125c6",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.agents import load_tools\n",
    "from langchain.agents import initialize_agent\n",
    "from langchain.agents.agent_types import AgentType\n",
    "from langchain.llms import OpenAI\n",
    "\n",
    "llm = OpenAI(temperature=0)\n",
    "tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm)\n",
    "agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "2f98c536",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001B[1m> Entering new AgentExecutor chain...\u001B[0m\n",
      "\u001B[32;1m\u001B[1;3m I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.\n",
      "Action: Search\n",
      "Action Input: \"Olivia Wilde boyfriend\"\u001B[0m\n",
      "Observation: \u001B[36;1m\u001B[1;3mSudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.\u001B[0m\n",
      "Thought:\u001B[32;1m\u001B[1;3m I need to find out Harry Styles' age.\n",
      "Action: Search\n",
      "Action Input: \"Harry Styles age\"\u001B[0m\n",
      "Observation: \u001B[36;1m\u001B[1;3m29 years\u001B[0m\n",
      "Thought:\u001B[32;1m\u001B[1;3m I need to calculate 29 raised to the 0.23 power.\n",
      "Action: Calculator\n",
      "Action Input: 29^0.23\u001B[0m\n",
      "Observation: \u001B[33;1m\u001B[1;3mAnswer: 2.169459462491557\n",
      "\u001B[0m\n",
      "Thought:\u001B[32;1m\u001B[1;3m I now know the final answer.\n",
      "Final Answer: Harry Styles, Olivia Wilde's boyfriend, is 29 years old and his age raised to the 0.23 power is 2.169459462491557.\u001B[0m\n",
      "\n",
      "\u001B[1m> Finished chain.\u001B[0m\n",
      "Total Tokens: 1506\n",
      "Prompt Tokens: 1350\n",
      "Completion Tokens: 156\n",
      "Total Cost (USD): $0.03012\n"
     ]
    }
   ],
   "source": [
    "with get_openai_callback() as cb:\n",
    "    response = agent.run(\"Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?\")\n",
    "    print(f\"Total Tokens: {cb.total_tokens}\")\n",
    "    print(f\"Prompt Tokens: {cb.prompt_tokens}\")\n",
    "    print(f\"Completion Tokens: {cb.completion_tokens}\")\n",
    "    print(f\"Total Cost (USD): ${cb.total_cost}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "80ca77a3",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.1"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}