From f78564d75cfd17890f1c961565e7a7a5fda69e55 Mon Sep 17 00:00:00 2001
From: Bagatur <22008038+baskaryan@users.noreply.github.com>
Date: Thu, 11 Apr 2024 16:42:04 -0700
Subject: [PATCH] docs: show tool msg in tool call docs (#20358)
---
.../model_io/chat/function_calling.ipynb | 707 ++++++++++++++++++
.../model_io/chat/function_calling.mdx | 324 --------
2 files changed, 707 insertions(+), 324 deletions(-)
create mode 100644 docs/docs/modules/model_io/chat/function_calling.ipynb
delete mode 100644 docs/docs/modules/model_io/chat/function_calling.mdx
diff --git a/docs/docs/modules/model_io/chat/function_calling.ipynb b/docs/docs/modules/model_io/chat/function_calling.ipynb
new file mode 100644
index 0000000000..92f66b429e
--- /dev/null
+++ b/docs/docs/modules/model_io/chat/function_calling.ipynb
@@ -0,0 +1,707 @@
+{
+ "cells": [
+ {
+ "cell_type": "raw",
+ "id": "a413ade7-48f0-4d43-a1f3-d87f550a8018",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "sidebar_position: 2\n",
+ "title: Tool/function calling\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "50d59b14-c434-4359-be8e-4a21378e762f",
+ "metadata": {},
+ "source": [
+ "# Tool calling\n",
+ "\n",
+ "```{=mdx}\n",
+ ":::info\n",
+ "We use the term tool calling interchangeably with function calling. Although\n",
+ "function calling is sometimes meant to refer to invocations of a single function,\n",
+ "we treat all models as though they can return multiple tool or function calls in \n",
+ "each message.\n",
+ ":::\n",
+ "```\n",
+ "\n",
+ "Tool calling allows a model to respond to a given prompt by generating output that \n",
+ "matches a user-defined schema. While the name implies that the model is performing \n",
+ "some action, this is actually not the case! The model is coming up with the \n",
+ "arguments to a tool, and actually running the tool (or not) is up to the user - \n",
+ "for example, if you want to [extract output matching some schema](/docs/use_cases/extraction/) \n",
+ "from unstructured text, you could give the model an \"extraction\" tool that takes \n",
+ "parameters matching the desired schema, then treat the generated output as your final \n",
+ "result.\n",
+ "\n",
+ "A tool call includes a name, arguments dict, and an optional identifier. The \n",
+ "arguments dict is structured `{argument_name: argument_value}`.\n",
+ "\n",
+ "Many LLM providers, including [Anthropic](https://www.anthropic.com/), \n",
+ "[Cohere](https://cohere.com/), [Google](https://cloud.google.com/vertex-ai), \n",
+ "[Mistral](https://mistral.ai/), [OpenAI](https://openai.com/), and others, \n",
+ "support variants of a tool calling feature. These features typically allow requests \n",
+ "to the LLM to include available tools and their schemas, and for responses to include \n",
+ "calls to these tools. For instance, given a search engine tool, an LLM might handle a \n",
+ "query by first issuing a call to the search engine. The system calling the LLM can \n",
+ "receive the tool call, execute it, and return the output to the LLM to inform its \n",
+ "response. LangChain includes a suite of [built-in tools](/docs/integrations/tools/) \n",
+ "and supports several methods for defining your own [custom tools](/docs/modules/tools/custom_tools). \n",
+ "Tool-calling is extremely useful for building [tool-using chains and agents](/docs/use_cases/tool_use), \n",
+ "and for getting structured outputs from models more generally.\n",
+ "\n",
+ "Providers adopt different conventions for formatting tool schemas and tool calls. \n",
+ "For instance, Anthropic returns tool calls as parsed structures within a larger content block:\n",
+ "```python\n",
+ "[\n",
+ " {\n",
+ " \"text\": \"\\nI should use a tool.\\n\",\n",
+ " \"type\": \"text\"\n",
+ " },\n",
+ " {\n",
+ " \"id\": \"id_value\",\n",
+ " \"input\": {\"arg_name\": \"arg_value\"},\n",
+ " \"name\": \"tool_name\",\n",
+ " \"type\": \"tool_use\"\n",
+ " }\n",
+ "]\n",
+ "```\n",
+ "whereas OpenAI separates tool calls into a distinct parameter, with arguments as JSON strings:\n",
+ "```python\n",
+ "{\n",
+ " \"tool_calls\": [\n",
+ " {\n",
+ " \"id\": \"id_value\",\n",
+ " \"function\": {\n",
+ " \"arguments\": '{\"arg_name\": \"arg_value\"}',\n",
+ " \"name\": \"tool_name\"\n",
+ " },\n",
+ " \"type\": \"function\"\n",
+ " }\n",
+ " ]\n",
+ "}\n",
+ "```\n",
+ "LangChain implements standard interfaces for defining tools, passing them to LLMs, \n",
+ "and representing tool calls.\n",
+ "\n",
+ "## Passing tools to LLMs\n",
+ "\n",
+ "Chat models supporting tool calling features implement a `.bind_tools` method, which \n",
+ "receives a list of LangChain [tool objects](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.BaseTool.html#langchain_core.tools.BaseTool) \n",
+ "and binds them to the chat model in its expected format. Subsequent invocations of the \n",
+ "chat model will include tool schemas in its calls to the LLM.\n",
+ "\n",
+ "For example, we can define the schema for custom tools using the `@tool` decorator \n",
+ "on Python functions:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 22,
+ "id": "841dca72-1b57-4a42-8e22-da4835c4cfe0",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from langchain_core.tools import tool\n",
+ "\n",
+ "\n",
+ "@tool\n",
+ "def add(a: int, b: int) -> int:\n",
+ " \"\"\"Adds a and b.\"\"\"\n",
+ " return a + b\n",
+ "\n",
+ "\n",
+ "@tool\n",
+ "def multiply(a: int, b: int) -> int:\n",
+ " \"\"\"Multiplies a and b.\"\"\"\n",
+ " return a * b\n",
+ "\n",
+ "\n",
+ "tools = [add, multiply]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "48058b7d-048d-48e6-a272-3931ad7ad146",
+ "metadata": {},
+ "source": [
+ "Or below, we define the schema using Pydantic:\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 23,
+ "id": "fca56328-85e4-4839-97b7-b5dc55920602",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from langchain_core.pydantic_v1 import BaseModel, Field\n",
+ "\n",
+ "\n",
+ "# Note that the docstrings here are crucial, as they will be passed along\n",
+ "# to the model along with the class name.\n",
+ "class Add(BaseModel):\n",
+ " \"\"\"Add two integers together.\"\"\"\n",
+ "\n",
+ " a: int = Field(..., description=\"First integer\")\n",
+ " b: int = Field(..., description=\"Second integer\")\n",
+ "\n",
+ "\n",
+ "class Multiply(BaseModel):\n",
+ " \"\"\"Multiply two integers together.\"\"\"\n",
+ "\n",
+ " a: int = Field(..., description=\"First integer\")\n",
+ " b: int = Field(..., description=\"Second integer\")\n",
+ "\n",
+ "\n",
+ "tools = [Add, Multiply]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "ead9068d-11f6-42f3-a508-3c1830189947",
+ "metadata": {},
+ "source": [
+ "We can bind them to chat models as follows:\n",
+ "\n",
+ "```{=mdx}\n",
+ "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
+ "\n",
+ "\n",
+ "```\n",
+ "\n",
+ "We can use the `bind_tools()` method to handle converting\n",
+ "`Multiply` to a \"tool\" and binding it to the model (i.e.,\n",
+ "passing it in each time the model is invoked)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 67,
+ "id": "44eb8327-a03d-4c7c-945e-30f13f455346",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# | echo: false\n",
+ "# | output: false\n",
+ "\n",
+ "from langchain_openai import ChatOpenAI\n",
+ "\n",
+ "llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 68,
+ "id": "af2a83ac-e43f-43ce-b107-9ed8376bfb75",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "llm_with_tools = llm.bind_tools(tools)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "16208230-f64f-4935-9aa1-280a91f34ba3",
+ "metadata": {},
+ "source": [
+ "## Tool calls\n",
+ "\n",
+ "If tool calls are included in a LLM response, they are attached to the corresponding \n",
+ "[message](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage) \n",
+ "or [message chunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html#langchain_core.messages.ai.AIMessageChunk) \n",
+ "as a list of [tool call](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolCall.html#langchain_core.messages.tool.ToolCall) \n",
+ "objects in the `.tool_calls` attribute. A `ToolCall` is a typed dict that includes a \n",
+ "tool name, dict of argument values, and (optionally) an identifier. Messages with no \n",
+ "tool calls default to an empty list for this attribute.\n",
+ "\n",
+ "Example:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 15,
+ "id": "1640a4b4-c201-4b23-b257-738d854fb9fd",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "[{'name': 'Multiply',\n",
+ " 'args': {'a': 3, 'b': 12},\n",
+ " 'id': 'call_1Tdp5wUXbYQzpkBoagGXqUTo'},\n",
+ " {'name': 'Add',\n",
+ " 'args': {'a': 11, 'b': 49},\n",
+ " 'id': 'call_k9v09vYioS3X0Qg35zESuUKI'}]"
+ ]
+ },
+ "execution_count": 15,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "query = \"What is 3 * 12? Also, what is 11 + 49?\"\n",
+ "\n",
+ "llm_with_tools.invoke(query).tool_calls"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "ac3ff0fe-5119-46b8-a578-530245bff23f",
+ "metadata": {},
+ "source": [
+ "The `.tool_calls` attribute should contain valid tool calls. Note that on occasion, \n",
+ "model providers may output malformed tool calls (e.g., arguments that are not \n",
+ "valid JSON). When parsing fails in these cases, instances \n",
+ "of [InvalidToolCall](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.InvalidToolCall.html#langchain_core.messages.tool.InvalidToolCall) \n",
+ "are populated in the `.invalid_tool_calls` attribute. An `InvalidToolCall` can have \n",
+ "a name, string arguments, identifier, and error message.\n",
+ "\n",
+ "If desired, [output parsers](/docs/modules/model_io/output_parsers) can further \n",
+ "process the output. For example, we can convert back to the original Pydantic class:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 16,
+ "id": "ca15fcad-74fe-4109-a1b1-346c3eefe238",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "[Multiply(a=3, b=12), Add(a=11, b=49)]"
+ ]
+ },
+ "execution_count": 16,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "from langchain_core.output_parsers.openai_tools import PydanticToolsParser\n",
+ "\n",
+ "chain = llm_with_tools | PydanticToolsParser(tools=[Multiply, Add])\n",
+ "chain.invoke(query)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0ba3505d-f405-43ba-93c4-7fbd84f6464b",
+ "metadata": {},
+ "source": [
+ "### Streaming\n",
+ "\n",
+ "When tools are called in a streaming context, \n",
+ "[message chunks](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html#langchain_core.messages.ai.AIMessageChunk) \n",
+ "will be populated with [tool call chunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolCallChunk.html#langchain_core.messages.tool.ToolCallChunk) \n",
+ "objects in a list via the `.tool_call_chunks` attribute. A `ToolCallChunk` includes \n",
+ "optional string fields for the tool `name`, `args`, and `id`, and includes an optional \n",
+ "integer field `index` that can be used to join chunks together. Fields are optional \n",
+ "because portions of a tool call may be streamed across different chunks (e.g., a chunk \n",
+ "that includes a substring of the arguments may have null values for the tool name and id).\n",
+ "\n",
+ "Because message chunks inherit from their parent message class, an \n",
+ "[AIMessageChunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html#langchain_core.messages.ai.AIMessageChunk) \n",
+ "with tool call chunks will also include `.tool_calls` and `.invalid_tool_calls` fields. \n",
+ "These fields are parsed best-effort from the message's tool call chunks.\n",
+ "\n",
+ "Note that not all providers currently support streaming for tool calls.\n",
+ "\n",
+ "Example:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 17,
+ "id": "4f54a0de-74c7-4f2d-86c5-660aed23840d",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[]\n",
+ "[{'name': 'Multiply', 'args': '', 'id': 'call_d39MsxKM5cmeGJOoYKdGBgzc', 'index': 0}]\n",
+ "[{'name': None, 'args': '{\"a\"', 'id': None, 'index': 0}]\n",
+ "[{'name': None, 'args': ': 3, ', 'id': None, 'index': 0}]\n",
+ "[{'name': None, 'args': '\"b\": 1', 'id': None, 'index': 0}]\n",
+ "[{'name': None, 'args': '2}', 'id': None, 'index': 0}]\n",
+ "[{'name': 'Add', 'args': '', 'id': 'call_QJpdxD9AehKbdXzMHxgDMMhs', 'index': 1}]\n",
+ "[{'name': None, 'args': '{\"a\"', 'id': None, 'index': 1}]\n",
+ "[{'name': None, 'args': ': 11,', 'id': None, 'index': 1}]\n",
+ "[{'name': None, 'args': ' \"b\": ', 'id': None, 'index': 1}]\n",
+ "[{'name': None, 'args': '49}', 'id': None, 'index': 1}]\n",
+ "[]\n"
+ ]
+ }
+ ],
+ "source": [
+ "async for chunk in llm_with_tools.astream(query):\n",
+ " print(chunk.tool_call_chunks)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "55046320-3466-4ec1-a1f8-336234ba9019",
+ "metadata": {},
+ "source": [
+ "Note that adding message chunks will merge their corresponding tool call chunks. This is the principle by which LangChain's various [tool output parsers](/docs/modules/model_io/output_parsers/types/openai_tools/) support streaming.\n",
+ "\n",
+ "For example, below we accumulate tool call chunks:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 18,
+ "id": "0a944af0-eedd-43c8-8ff3-f4301f129d9b",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[]\n",
+ "[{'name': 'Multiply', 'args': '', 'id': 'call_erKtz8z3e681cmxYKbRof0NS', 'index': 0}]\n",
+ "[{'name': 'Multiply', 'args': '{\"a\"', 'id': 'call_erKtz8z3e681cmxYKbRof0NS', 'index': 0}]\n",
+ "[{'name': 'Multiply', 'args': '{\"a\": 3, ', 'id': 'call_erKtz8z3e681cmxYKbRof0NS', 'index': 0}]\n",
+ "[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 1', 'id': 'call_erKtz8z3e681cmxYKbRof0NS', 'index': 0}]\n",
+ "[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_erKtz8z3e681cmxYKbRof0NS', 'index': 0}]\n",
+ "[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_erKtz8z3e681cmxYKbRof0NS', 'index': 0}, {'name': 'Add', 'args': '', 'id': 'call_tYHYdEV2YBvzDcSCiFCExNvw', 'index': 1}]\n",
+ "[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_erKtz8z3e681cmxYKbRof0NS', 'index': 0}, {'name': 'Add', 'args': '{\"a\"', 'id': 'call_tYHYdEV2YBvzDcSCiFCExNvw', 'index': 1}]\n",
+ "[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_erKtz8z3e681cmxYKbRof0NS', 'index': 0}, {'name': 'Add', 'args': '{\"a\": 11,', 'id': 'call_tYHYdEV2YBvzDcSCiFCExNvw', 'index': 1}]\n",
+ "[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_erKtz8z3e681cmxYKbRof0NS', 'index': 0}, {'name': 'Add', 'args': '{\"a\": 11, \"b\": ', 'id': 'call_tYHYdEV2YBvzDcSCiFCExNvw', 'index': 1}]\n",
+ "[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_erKtz8z3e681cmxYKbRof0NS', 'index': 0}, {'name': 'Add', 'args': '{\"a\": 11, \"b\": 49}', 'id': 'call_tYHYdEV2YBvzDcSCiFCExNvw', 'index': 1}]\n",
+ "[{'name': 'Multiply', 'args': '{\"a\": 3, \"b\": 12}', 'id': 'call_erKtz8z3e681cmxYKbRof0NS', 'index': 0}, {'name': 'Add', 'args': '{\"a\": 11, \"b\": 49}', 'id': 'call_tYHYdEV2YBvzDcSCiFCExNvw', 'index': 1}]\n"
+ ]
+ }
+ ],
+ "source": [
+ "first = True\n",
+ "async for chunk in llm_with_tools.astream(query):\n",
+ " if first:\n",
+ " gathered = chunk\n",
+ " first = False\n",
+ " else:\n",
+ " gathered = gathered + chunk\n",
+ "\n",
+ " print(gathered.tool_call_chunks)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 19,
+ "id": "db4e3e3a-3553-44dc-bd31-149c0981a06a",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "print(type(gathered.tool_call_chunks[0][\"args\"]))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "95e92826-6e55-4684-9498-556f357f73ac",
+ "metadata": {},
+ "source": [
+ "And below we accumulate tool calls to demonstrate partial parsing:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 20,
+ "id": "e9402bde-d4b5-4564-a99e-f88c9b46b28a",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[]\n",
+ "[]\n",
+ "[{'name': 'Multiply', 'args': {}, 'id': 'call_BXqUtt6jYCwR1DguqpS2ehP0'}]\n",
+ "[{'name': 'Multiply', 'args': {'a': 3}, 'id': 'call_BXqUtt6jYCwR1DguqpS2ehP0'}]\n",
+ "[{'name': 'Multiply', 'args': {'a': 3, 'b': 1}, 'id': 'call_BXqUtt6jYCwR1DguqpS2ehP0'}]\n",
+ "[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_BXqUtt6jYCwR1DguqpS2ehP0'}]\n",
+ "[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_BXqUtt6jYCwR1DguqpS2ehP0'}]\n",
+ "[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_BXqUtt6jYCwR1DguqpS2ehP0'}, {'name': 'Add', 'args': {}, 'id': 'call_UjSHJKROSAw2BDc8cp9cSv4i'}]\n",
+ "[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_BXqUtt6jYCwR1DguqpS2ehP0'}, {'name': 'Add', 'args': {'a': 11}, 'id': 'call_UjSHJKROSAw2BDc8cp9cSv4i'}]\n",
+ "[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_BXqUtt6jYCwR1DguqpS2ehP0'}, {'name': 'Add', 'args': {'a': 11}, 'id': 'call_UjSHJKROSAw2BDc8cp9cSv4i'}]\n",
+ "[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_BXqUtt6jYCwR1DguqpS2ehP0'}, {'name': 'Add', 'args': {'a': 11, 'b': 49}, 'id': 'call_UjSHJKROSAw2BDc8cp9cSv4i'}]\n",
+ "[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_BXqUtt6jYCwR1DguqpS2ehP0'}, {'name': 'Add', 'args': {'a': 11, 'b': 49}, 'id': 'call_UjSHJKROSAw2BDc8cp9cSv4i'}]\n"
+ ]
+ }
+ ],
+ "source": [
+ "first = True\n",
+ "async for chunk in llm_with_tools.astream(query):\n",
+ " if first:\n",
+ " gathered = chunk\n",
+ " first = False\n",
+ " else:\n",
+ " gathered = gathered + chunk\n",
+ "\n",
+ " print(gathered.tool_calls)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 21,
+ "id": "8c2f21cc-0c6d-416a-871f-e854621c96e2",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "print(type(gathered.tool_calls[0][\"args\"]))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "97a0c977-0c3c-4011-b49b-db98c609d0ce",
+ "metadata": {},
+ "source": [
+ "## Passing tool outputs to model\n",
+ "\n",
+ "If we're using the model-generated tool invocations to actually call tools and want to pass the tool results back to the model, we can do so using `ToolMessage`s."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 117,
+ "id": "48049192-be28-42ab-9a44-d897924e67cd",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "[HumanMessage(content='What is 3 * 12? Also, what is 11 + 49?'),\n",
+ " AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_K5DsWEmgt6D08EI9AFu9NaL1', 'function': {'arguments': '{\"a\": 3, \"b\": 12}', 'name': 'Multiply'}, 'type': 'function'}, {'id': 'call_qywVrsplg0ZMv7LHYYMjyG81', 'function': {'arguments': '{\"a\": 11, \"b\": 49}', 'name': 'Add'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 50, 'prompt_tokens': 105, 'total_tokens': 155}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_b28b39ffa8', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-1a0b8cdd-9221-4d94-b2ed-5701f67ce9fe-0', tool_calls=[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_K5DsWEmgt6D08EI9AFu9NaL1'}, {'name': 'Add', 'args': {'a': 11, 'b': 49}, 'id': 'call_qywVrsplg0ZMv7LHYYMjyG81'}]),\n",
+ " ToolMessage(content='36', tool_call_id='call_K5DsWEmgt6D08EI9AFu9NaL1'),\n",
+ " ToolMessage(content='60', tool_call_id='call_qywVrsplg0ZMv7LHYYMjyG81')]"
+ ]
+ },
+ "execution_count": 117,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "from langchain_core.messages import HumanMessage, ToolMessage\n",
+ "\n",
+ "messages = [HumanMessage(query)]\n",
+ "ai_msg = llm_with_tools.invoke(messages)\n",
+ "messages.append(ai_msg)\n",
+ "for tool_call in ai_msg.tool_calls:\n",
+ " selected_tool = {\"add\": add, \"multiply\": multiply}[tool_call[\"name\"].lower()]\n",
+ " tool_output = selected_tool.invoke(tool_call[\"args\"])\n",
+ " messages.append(ToolMessage(tool_output, tool_call_id=tool_call[\"id\"]))\n",
+ "messages"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 118,
+ "id": "611e0f36-d736-48d1-bca1-1cec51d223f3",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "AIMessage(content='3 * 12 is 36 and 11 + 49 is 60.', response_metadata={'token_usage': {'completion_tokens': 18, 'prompt_tokens': 171, 'total_tokens': 189}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_b28b39ffa8', 'finish_reason': 'stop', 'logprobs': None}, id='run-a6c8093c-b16a-4c92-8308-7c9ac998118c-0')"
+ ]
+ },
+ "execution_count": 118,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "llm_with_tools.invoke(messages)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a5937498-d6fe-400a-b192-ef35c314168e",
+ "metadata": {},
+ "source": [
+ "## Few-shot prompting\n",
+ "\n",
+ "For more complex tool use it's very useful to add few-shot examples to the prompt. We can do this by adding `AIMessage`s with `ToolCall`s and corresponding `ToolMessage`s to our prompt.\n",
+ "\n",
+ "For example, even with some special instructions our model can get tripped up by order of operations:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 112,
+ "id": "5ef2e7c3-0925-49da-ab8f-e42c4fa40f29",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "[{'name': 'Multiply',\n",
+ " 'args': {'a': 119, 'b': 8},\n",
+ " 'id': 'call_Dl3FXRVkQCFW4sUNYOe4rFr7'},\n",
+ " {'name': 'Add',\n",
+ " 'args': {'a': 952, 'b': -20},\n",
+ " 'id': 'call_n03l4hmka7VZTCiP387Wud2C'}]"
+ ]
+ },
+ "execution_count": 112,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "llm_with_tools.invoke(\n",
+ " \"Whats 119 times 8 minus 20. Don't do any math yourself, only use tools for math. Respect order of operations\"\n",
+ ").tool_calls"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a5249069-b5f8-40ac-ae74-30d67c4e9168",
+ "metadata": {},
+ "source": [
+ "The model shouldn't be trying to add anything yet, since it technically can't know the results of 119 * 8 yet.\n",
+ "\n",
+ "By adding a prompt with some examples we can correct this behavior:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 107,
+ "id": "7b2e8b19-270f-4e1a-8be7-7aad704c1cf4",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "[{'name': 'Multiply',\n",
+ " 'args': {'a': 119, 'b': 8},\n",
+ " 'id': 'call_MoSgwzIhPxhclfygkYaKIsGZ'}]"
+ ]
+ },
+ "execution_count": 107,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "from langchain_core.messages import AIMessage\n",
+ "from langchain_core.prompts import ChatPromptTemplate\n",
+ "from langchain_core.runnables import RunnablePassthrough\n",
+ "\n",
+ "examples = [\n",
+ " HumanMessage(\n",
+ " \"What's the product of 317253 and 128472 plus four\", name=\"example_user\"\n",
+ " ),\n",
+ " AIMessage(\n",
+ " \"\",\n",
+ " name=\"example_assistant\",\n",
+ " tool_calls=[\n",
+ " {\"name\": \"Multiply\", \"args\": {\"x\": 317253, \"y\": 128472}, \"id\": \"1\"}\n",
+ " ],\n",
+ " ),\n",
+ " ToolMessage(\"16505054784\", tool_call_id=\"1\"),\n",
+ " AIMessage(\n",
+ " \"\",\n",
+ " name=\"example_assistant\",\n",
+ " tool_calls=[{\"name\": \"Add\", \"args\": {\"x\": 16505054784, \"y\": 4}, \"id\": \"2\"}],\n",
+ " ),\n",
+ " ToolMessage(\"16505054788\", tool_call_id=\"2\"),\n",
+ " AIMessage(\n",
+ " \"The product of 317253 and 128472 plus four is 16505054788\",\n",
+ " name=\"example_assistant\",\n",
+ " ),\n",
+ "]\n",
+ "\n",
+ "system = \"\"\"You are bad at math but are an expert at using a calculator. \n",
+ "\n",
+ "Use past tool usage as an example of how to correctly use the tools.\"\"\"\n",
+ "few_shot_prompt = ChatPromptTemplate.from_messages(\n",
+ " [\n",
+ " (\"system\", system),\n",
+ " *examples,\n",
+ " (\"human\", \"{query}\"),\n",
+ " ]\n",
+ ")\n",
+ "\n",
+ "chain = {\"query\": RunnablePassthrough()} | few_shot_prompt | llm_with_tools\n",
+ "chain.invoke(\"Whats 119 times 8 minus 20\").tool_calls"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "19160e3e-3eb5-4e9a-ae56-74a2dce0af32",
+ "metadata": {},
+ "source": [
+ "Seems like we get the correct output this time.\n",
+ "\n",
+ "Here's what the [LangSmith trace](https://smith.langchain.com/public/f70550a1-585f-4c9d-a643-13148ab1616f/r) looks like."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "020cfd3b-0838-49d0-96bb-7cd919921833",
+ "metadata": {},
+ "source": [
+ "## Next steps\n",
+ "\n",
+ "- **Output parsing**: See [OpenAI Tools output\n",
+ " parsers](/docs/modules/model_io/output_parsers/types/openai_tools/)\n",
+ " and [OpenAI Functions output\n",
+ " parsers](/docs/modules/model_io/output_parsers/types/openai_functions/)\n",
+ " to learn about extracting the function calling API responses into\n",
+ " various formats.\n",
+ "- **Structured output chains**: [Some models have constructors](/docs/modules/model_io/chat/structured_output/) that\n",
+ " handle creating a structured output chain for you.\n",
+ "- **Tool use**: See how to construct chains and agents that\n",
+ " call the invoked tools in [these\n",
+ " guides](/docs/use_cases/tool_use/)."
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "poetry-venv-2",
+ "language": "python",
+ "name": "poetry-venv-2"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.1"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/docs/docs/modules/model_io/chat/function_calling.mdx b/docs/docs/modules/model_io/chat/function_calling.mdx
deleted file mode 100644
index 89f420fd0d..0000000000
--- a/docs/docs/modules/model_io/chat/function_calling.mdx
+++ /dev/null
@@ -1,324 +0,0 @@
----
-sidebar_position: 2
-title: Tool/function calling
----
-
-# Tool calling
-
-:::info
-We use the term tool calling interchangeably with function calling. Although
-function calling is sometimes meant to refer to invocations of a single function,
-we treat all models as though they can return multiple tool or function calls in
-each message.
-:::
-
-# Calling Tools
-
-Tool calling allows a model to respond to a given prompt by generating output that
-matches a user-defined schema. While the name implies that the model is performing
-some action, this is actually not the case! The model is coming up with the
-arguments to a tool, and actually running the tool (or not) is up to the user -
-for example, if you want to [extract output matching some schema](/docs/use_cases/extraction/)
-from unstructured text, you could give the model an "extraction" tool that takes
-parameters matching the desired schema, then treat the generated output as your final
-result.
-
-A tool call includes a name, arguments dict, and an optional identifier. The
-arguments dict is structured `{argument_name: argument_value}`.
-
-Many LLM providers, including [Anthropic](https://www.anthropic.com/),
-[Cohere](https://cohere.com/), [Google](https://cloud.google.com/vertex-ai),
-[Mistral](https://mistral.ai/), [OpenAI](https://openai.com/), and others,
-support variants of a tool calling feature. These features typically allow requests
-to the LLM to include available tools and their schemas, and for responses to include
-calls to these tools. For instance, given a search engine tool, an LLM might handle a
-query by first issuing a call to the search engine. The system calling the LLM can
-receive the tool call, execute it, and return the output to the LLM to inform its
-response. LangChain includes a suite of [built-in tools](/docs/integrations/tools/)
-and supports several methods for defining your own [custom tools](/docs/modules/tools/custom_tools).
-Tool-calling is extremely useful for building [tool-using chains and agents](/docs/use_cases/tool_use),
-and for getting structured outputs from models more generally.
-
-Providers adopt different conventions for formatting tool schemas and tool calls.
-For instance, Anthropic returns tool calls as parsed structures within a larger content block:
-```
-[
- {
- "text": "\nI should use a tool.\n",
- "type": "text"
- },
- {
- "id": "id_value",
- "input": {"arg_name": "arg_value"},
- "name": "tool_name",
- "type": "tool_use"
- }
-]
-```
-whereas OpenAI separates tool calls into a distinct parameter, with arguments as JSON strings:
-```
-{
- "tool_calls": [
- {
- "id": "id_value",
- "function": {
- "arguments": '{"arg_name": "arg_value"}',
- "name": "tool_name"
- },
- "type": "function"
- }
- ]
-}
-```
-LangChain implements standard interfaces for defining tools, passing them to LLMs,
-and representing tool calls.
-
-## Passing tools to LLMs
-
-Chat models supporting tool calling features implement a `.bind_tools` method, which
-receives a list of LangChain [tool objects](https://api.python.langchain.com/en/latest/tools/langchain_core.tools.BaseTool.html#langchain_core.tools.BaseTool)
-and binds them to the chat model in its expected format. Subsequent invocations of the
-chat model will include tool schemas in its calls to the LLM.
-
-For example, we can define the schema for custom tools using the `@tool` decorator
-on Python functions:
-
-```python
-from langchain.tools import tool
-
-
-@tool
-def add(a: int, b: int) -> int:
- """Adds a and b."""
- return a + b
-
-
-@tool
-def multiply(a: int, b: int) -> int:
- """Multiplies a and b."""
- return a * b
-
-
-tools = [add, multiply]
-```
-
-Or below, we define the schema using Pydantic:
-```python
-from langchain_core.pydantic_v1 import BaseModel, Field
-
-
-# Note that the docstrings here are crucial, as they will be passed along
-# to the model along with the class name.
-class Add(BaseModel):
- """Add two integers together."""
-
- a: int = Field(..., description="First integer")
- b: int = Field(..., description="Second integer")
-
-
-class Multiply(BaseModel):
- """Multiply two integers together."""
-
- a: int = Field(..., description="First integer")
- b: int = Field(..., description="Second integer")
-
-
-tools = [Add, Multiply]
-```
-
-We can bind them to chat models as follows:
-
-import Tabs from "@theme/Tabs";
-import TabItem from "@theme/TabItem";
-
-import ChatModelTabs from "@theme/ChatModelTabs";
-
-
-
-We can use the `bind_tools()` method to handle converting
-`Multiply` to a "tool" and binding it to the model (i.e.,
-passing it in each time the model is invoked).
-
-```python
-llm_with_tools = llm.bind_tools(tools)
-```
-
-## Tool calls
-
-If tool calls are included in a LLM response, they are attached to the corresponding
-[message](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage)
-or [message chunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html#langchain_core.messages.ai.AIMessageChunk)
-as a list of [tool call](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolCall.html#langchain_core.messages.tool.ToolCall)
-objects in the `.tool_calls` attribute. A `ToolCall` is a typed dict that includes a
-tool name, dict of argument values, and (optionally) an identifier. Messages with no
-tool calls default to an empty list for this attribute.
-
-Example:
-
-```python
-query = "What is 3 * 12? Also, what is 11 + 49?"
-
-llm_with_tools.invoke(query).tool_calls
-```
-```text
-[{'name': 'Multiply',
- 'args': {'a': 3, 'b': 12},
- 'id': 'call_viACG45wBz9jYzljHIwHamXw'},
- {'name': 'Add',
- 'args': {'a': 11, 'b': 49},
- 'id': 'call_JMFUqoi5L27rGeMuII4MJMWo'}]
-```
-
-The `.tool_calls` attribute should contain valid tool calls. Note that on occasion,
-model providers may output malformed tool calls (e.g., arguments that are not
-valid JSON). When parsing fails in these cases, instances
-of [InvalidToolCall](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.InvalidToolCall.html#langchain_core.messages.tool.InvalidToolCall)
-are populated in the `.invalid_tool_calls` attribute. An `InvalidToolCall` can have
-a name, string arguments, identifier, and error message.
-
-If desired, [output parsers](/docs/modules/model_io/output_parsers) can further
-process the output. For example, we can convert back to the original Pydantic class:
-
-```python
-from langchain_core.output_parsers.openai_tools import PydanticToolsParser
-
-chain = llm_with_tools | PydanticToolsParser(tools=[Multiply, Add])
-chain.invoke(query)
-```
-```text
-[Multiply(a=3, b=12), Add(a=11, b=49)]
-```
-
-### Streaming
-
-When tools are called in a streaming context,
-[message chunks](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html#langchain_core.messages.ai.AIMessageChunk)
-will be populated with [tool call chunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolCallChunk.html#langchain_core.messages.tool.ToolCallChunk)
-objects in a list via the `.tool_call_chunks` attribute. A `ToolCallChunk` includes
-optional string fields for the tool `name`, `args`, and `id`, and includes an optional
-integer field `index` that can be used to join chunks together. Fields are optional
-because portions of a tool call may be streamed across different chunks (e.g., a chunk
-that includes a substring of the arguments may have null values for the tool name and id).
-
-Because message chunks inherit from their parent message class, an
-[AIMessageChunk](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html#langchain_core.messages.ai.AIMessageChunk)
-with tool call chunks will also include `.tool_calls` and `.invalid_tool_calls` fields.
-These fields are parsed best-effort from the message's tool call chunks.
-
-Note that not all providers currently support streaming for tool calls.
-
-Example:
-
-```python
-async for chunk in llm_with_tools.astream(query):
- print(chunk.tool_call_chunks)
-```
-
-```text
-[]
-[{'name': 'Multiply', 'args': '', 'id': 'call_Al2xpR4uFPXQUDzGTSawMOah', 'index': 0}]
-[{'name': None, 'args': '{"a"', 'id': None, 'index': 0}]
-[{'name': None, 'args': ': 3, ', 'id': None, 'index': 0}]
-[{'name': None, 'args': '"b": 1', 'id': None, 'index': 0}]
-[{'name': None, 'args': '2}', 'id': None, 'index': 0}]
-[{'name': 'Add', 'args': '', 'id': 'call_VV6ck8JSQ6joKtk2xGtNKgXf', 'index': 1}]
-[{'name': None, 'args': '{"a"', 'id': None, 'index': 1}]
-[{'name': None, 'args': ': 11,', 'id': None, 'index': 1}]
-[{'name': None, 'args': ' "b": ', 'id': None, 'index': 1}]
-[{'name': None, 'args': '49}', 'id': None, 'index': 1}]
-[]
-```
-
-Note that adding message chunks will merge their corresponding tool call chunks. This is the principle by which LangChain's various [tool output parsers](/docs/modules/model_io/output_parsers/types/openai_tools/) support streaming.
-
-For example, below we accumulate tool call chunks:
-
-```python
-first = True
-async for chunk in llm_with_tools.astream(query):
- if first:
- gathered = chunk
- first = False
- else:
- gathered = gathered + chunk
-
- print(gathered.tool_call_chunks)
-```
-
-```text
-[]
-[{'name': 'Multiply', 'args': '', 'id': 'call_2MG1IGft6WmgMooqZgJ07JX6', 'index': 0}]
-[{'name': 'Multiply', 'args': '{"a"', 'id': 'call_2MG1IGft6WmgMooqZgJ07JX6', 'index': 0}]
-[{'name': 'Multiply', 'args': '{"a": 3, ', 'id': 'call_2MG1IGft6WmgMooqZgJ07JX6', 'index': 0}]
-[{'name': 'Multiply', 'args': '{"a": 3, "b": 1', 'id': 'call_2MG1IGft6WmgMooqZgJ07JX6', 'index': 0}]
-[{'name': 'Multiply', 'args': '{"a": 3, "b": 12}', 'id': 'call_2MG1IGft6WmgMooqZgJ07JX6', 'index': 0}]
-[{'name': 'Multiply', 'args': '{"a": 3, "b": 12}', 'id': 'call_2MG1IGft6WmgMooqZgJ07JX6', 'index': 0}, {'name': 'Add', 'args': '', 'id': 'call_uGot9MOHDcz67Bj0h13c7QA5', 'index': 1}]
-[{'name': 'Multiply', 'args': '{"a": 3, "b": 12}', 'id': 'call_2MG1IGft6WmgMooqZgJ07JX6', 'index': 0}, {'name': 'Add', 'args': '{"a"', 'id': 'call_uGot9MOHDcz67Bj0h13c7QA5', 'index': 1}]
-[{'name': 'Multiply', 'args': '{"a": 3, "b": 12}', 'id': 'call_2MG1IGft6WmgMooqZgJ07JX6', 'index': 0}, {'name': 'Add', 'args': '{"a": 11,', 'id': 'call_uGot9MOHDcz67Bj0h13c7QA5', 'index': 1}]
-[{'name': 'Multiply', 'args': '{"a": 3, "b": 12}', 'id': 'call_2MG1IGft6WmgMooqZgJ07JX6', 'index': 0}, {'name': 'Add', 'args': '{"a": 11, "b": ', 'id': 'call_uGot9MOHDcz67Bj0h13c7QA5', 'index': 1}]
-[{'name': 'Multiply', 'args': '{"a": 3, "b": 12}', 'id': 'call_2MG1IGft6WmgMooqZgJ07JX6', 'index': 0}, {'name': 'Add', 'args': '{"a": 11, "b": 49}', 'id': 'call_uGot9MOHDcz67Bj0h13c7QA5', 'index': 1}]
-[{'name': 'Multiply', 'args': '{"a": 3, "b": 12}', 'id': 'call_2MG1IGft6WmgMooqZgJ07JX6', 'index': 0}, {'name': 'Add', 'args': '{"a": 11, "b": 49}', 'id': 'call_uGot9MOHDcz67Bj0h13c7QA5', 'index': 1}]
-```
-
-```python
-print(type(gathered.tool_call_chunks[0]["args"]))
-```
-
-```text
-
-```
-
-And below we accumulate tool calls to demonstrate partial parsing:
-
-```python
-first = True
-async for chunk in llm_with_tools.astream(query):
- if first:
- gathered = chunk
- first = False
- else:
- gathered = gathered + chunk
-
- print(gathered.tool_calls)
-```
-
-```text
-[]
-[]
-[{'name': 'Multiply', 'args': {}, 'id': 'call_z3B4o82SQDY5NCnmrXIcVQo4'}]
-[{'name': 'Multiply', 'args': {'a': 3}, 'id': 'call_z3B4o82SQDY5NCnmrXIcVQo4'}]
-[{'name': 'Multiply', 'args': {'a': 3, 'b': 1}, 'id': 'call_z3B4o82SQDY5NCnmrXIcVQo4'}]
-[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_z3B4o82SQDY5NCnmrXIcVQo4'}]
-[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_z3B4o82SQDY5NCnmrXIcVQo4'}]
-[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_z3B4o82SQDY5NCnmrXIcVQo4'}, {'name': 'Add', 'args': {}, 'id': 'call_zPAyMWr8hN1q083GWGX2dSiB'}]
-[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_z3B4o82SQDY5NCnmrXIcVQo4'}, {'name': 'Add', 'args': {'a': 11}, 'id': 'call_zPAyMWr8hN1q083GWGX2dSiB'}]
-[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_z3B4o82SQDY5NCnmrXIcVQo4'}, {'name': 'Add', 'args': {'a': 11}, 'id': 'call_zPAyMWr8hN1q083GWGX2dSiB'}]
-[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_z3B4o82SQDY5NCnmrXIcVQo4'}, {'name': 'Add', 'args': {'a': 11, 'b': 49}, 'id': 'call_zPAyMWr8hN1q083GWGX2dSiB'}]
-[{'name': 'Multiply', 'args': {'a': 3, 'b': 12}, 'id': 'call_z3B4o82SQDY5NCnmrXIcVQo4'}, {'name': 'Add', 'args': {'a': 11, 'b': 49}, 'id': 'call_zPAyMWr8hN1q083GWGX2dSiB'}]
-```
-
-```python
-print(type(gathered.tool_calls[0]["args"]))
-```
-
-```text
-
-```
-
-
-## Next steps
-
-- **Output parsing**: See [OpenAI Tools output
- parsers](/docs/modules/model_io/output_parsers/types/openai_tools/)
- and [OpenAI Functions output
- parsers](/docs/modules/model_io/output_parsers/types/openai_functions/)
- to learn about extracting the function calling API responses into
- various formats.
-- **Structured output chains**: [Some models have constructors](/docs/modules/model_io/chat/structured_output/) that
- handle creating a structured output chain for you.
-- **Tool use**: See how to construct chains and agents that actually
- call the invoked tools in [these
- guides](/docs/use_cases/tool_use/).