docs: update func calling doc (#18300)

pull/18334/head
Bagatur 7 months ago committed by GitHub
parent 68ad3414a2
commit 6a5b084704

docs/.gitignore vendored

@@ -0,0 +1 @@
/.quarto/

@@ -1,492 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "dae8d4ed-9150-45da-b494-7717ab0a2960",
"metadata": {},
"source": [
"# Function calling\n",
"\n",
"Certain chat models, like [OpenAI's](https://platform.openai.com/docs/guides/function-calling), have a function-calling API that lets you describe functions and their arguments, and have the model return a JSON object with a function to invoke and the inputs to that function. Function-calling is extremely useful for building [tool-using chains and agents](/docs/use_cases/tool_use/), and for getting structured outputs from models more generally.\n",
"\n",
"LangChain comes with a number of utilities to make function-calling easy. Namely, it comes with\n",
"\n",
"* simple syntax for binding functions to models\n",
"* converters for formatting various types of objects to the expected function schemas\n",
"* output parsers for extracting the function invocations from API responses\n",
"\n",
"We'll focus here on the first two bullets. To see how output parsing works as well check out the [OpenAI Tools output parsers](/docs/modules/model_io/output_parsers/types/openai_tools)."
]
},
{
"cell_type": "markdown",
"id": "a177c64b-7c99-495c-b362-5ed3b40aa26a",
"metadata": {},
"source": [
"## Defining functions\n",
"\n",
"We'll focus on the [OpenAI function format](https://platform.openai.com/docs/api-reference/chat/create#chat-create-tools) here since as of this writing that is the main model provider that supports function calling. LangChain has a built-in converter that can turn Python functions, Pydantic classes, and LangChain Tools into the OpenAI function format:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "f6d1dc0c-6170-4977-809f-365099f628ea",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.2.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m23.3.2\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -qU langchain-core langchain-openai"
]
},
{
"cell_type": "markdown",
"id": "6bd290bd-7621-466b-a73e-fc8480f879ec",
"metadata": {},
"source": [
"### Python function"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "41ebab5c-0e9f-4b49-86ee-9290ced2fe96",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"multiply\",\n",
" \"description\": \"Multiply two integers together.\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"a\": {\n",
" \"type\": \"integer\",\n",
" \"description\": \"First integer\"\n",
" },\n",
" \"b\": {\n",
" \"type\": \"integer\",\n",
" \"description\": \"Second integer\"\n",
" }\n",
" },\n",
" \"required\": [\n",
" \"a\",\n",
" \"b\"\n",
" ]\n",
" }\n",
" }\n",
"}\n"
]
}
],
"source": [
"import json\n",
"\n",
"from langchain_core.utils.function_calling import convert_to_openai_tool\n",
"\n",
"\n",
"def multiply(a: int, b: int) -> int:\n",
" \"\"\"Multiply two integers together.\n",
"\n",
" Args:\n",
" a: First integer\n",
" b: Second integer\n",
" \"\"\"\n",
" return a * b\n",
"\n",
"\n",
"print(json.dumps(convert_to_openai_tool(multiply), indent=2))"
]
},
{
"cell_type": "markdown",
"id": "ecf22577-38ab-48f1-ba0b-371aaba1bacc",
"metadata": {},
"source": [
"### Pydantic class"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "ecc8ffd4-aed3-4f47-892d-1896cc1ca4dc",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"multiply\",\n",
" \"description\": \"Multiply two integers together.\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"a\": {\n",
" \"description\": \"First integer\",\n",
" \"type\": \"integer\"\n",
" },\n",
" \"b\": {\n",
" \"description\": \"Second integer\",\n",
" \"type\": \"integer\"\n",
" }\n",
" },\n",
" \"required\": [\n",
" \"a\",\n",
" \"b\"\n",
" ]\n",
" }\n",
" }\n",
"}\n"
]
}
],
"source": [
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class multiply(BaseModel):\n",
" \"\"\"Multiply two integers together.\"\"\"\n",
"\n",
" a: int = Field(..., description=\"First integer\")\n",
" b: int = Field(..., description=\"Second integer\")\n",
"\n",
"\n",
"print(json.dumps(convert_to_openai_tool(multiply), indent=2))"
]
},
{
"cell_type": "markdown",
"id": "b83d5a88-50ed-4ae4-85cf-8b895617496f",
"metadata": {},
"source": [
"### LangChain Tool"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "696c7dd6-660c-4797-909f-bf878b3acf93",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"multiply\",\n",
" \"description\": \"Multiply two integers together.\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"a\": {\n",
" \"description\": \"First integer\",\n",
" \"type\": \"integer\"\n",
" },\n",
" \"b\": {\n",
" \"description\": \"Second integer\",\n",
" \"type\": \"integer\"\n",
" }\n",
" },\n",
" \"required\": [\n",
" \"a\",\n",
" \"b\"\n",
" ]\n",
" }\n",
" }\n",
"}\n"
]
}
],
"source": [
"from typing import Any, Type\n",
"\n",
"from langchain_core.tools import BaseTool\n",
"\n",
"\n",
"class MultiplySchema(BaseModel):\n",
" \"\"\"Multiply tool schema.\"\"\"\n",
"\n",
" a: int = Field(..., description=\"First integer\")\n",
" b: int = Field(..., description=\"Second integer\")\n",
"\n",
"\n",
"class Multiply(BaseTool):\n",
" args_schema: Type[BaseModel] = MultiplySchema\n",
" name: str = \"multiply\"\n",
" description: str = \"Multiply two integers together.\"\n",
"\n",
" def _run(self, a: int, b: int, **kwargs: Any) -> Any:\n",
" return a * b\n",
"\n",
"\n",
"# Note: we're passing in a Multiply object not the class itself.\n",
"print(json.dumps(convert_to_openai_tool(Multiply()), indent=2))"
]
},
{
"cell_type": "markdown",
"id": "04bda177-202f-4811-bb74-f3fa7094a15b",
"metadata": {},
"source": [
"## Binding functions\n",
"\n",
"Now that we've defined a function, we'll want to pass it in to our model."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "a5aa93a7-6859-43e8-be85-619d975b908c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_JvOu9oUwMrQHiDekZTbpNCHY', 'function': {'arguments': '{\\n \"a\": 5,\\n \"b\": 3\\n}', 'name': 'multiply'}, 'type': 'function'}]})"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo\")\n",
"llm.invoke(\"what's 5 times three\", tools=[convert_to_openai_tool(multiply)])"
]
},
{
"cell_type": "markdown",
"id": "dd0e7365-32d0-46a3-b8f2-caf27d5d9262",
"metadata": {},
"source": [
"And if we wanted this function to be passed in every time we call the tool, we could bind it to the tool:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "87165d64-31a7-4332-965e-18fa939fda50",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_cwRoTnD1ux1SnWXLrTj2KlWH', 'function': {'arguments': '{\\n \"a\": 5,\\n \"b\": 3\\n}', 'name': 'multiply'}, 'type': 'function'}]})"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_with_tool = llm.bind(tools=[convert_to_openai_tool(multiply)])\n",
"llm_with_tool.invoke(\"what's 5 times three\")"
]
},
{
"cell_type": "markdown",
"id": "21b4d000-3828-4e32-9226-55119f47ee67",
"metadata": {},
"source": [
"We can also enforce that a tool is called using the [tool_choice](https://platform.openai.com/docs/api-reference/chat/create#chat-create-tools) parameter."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "2daa354c-cc85-4a60-a9b2-b681ec22ca33",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_sWjLyioSZAtYMQRLMTzncz1v', 'function': {'arguments': '{\\n \"a\": 5,\\n \"b\": 4\\n}', 'name': 'multiply'}, 'type': 'function'}]})"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_with_tool = llm.bind(\n",
" tools=[convert_to_openai_tool(multiply)],\n",
" tool_choice={\"type\": \"function\", \"function\": {\"name\": \"multiply\"}},\n",
")\n",
"llm_with_tool.invoke(\n",
" \"don't answer my question. no do answer my question. no don't. what's five times four\"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "ce013d11-49ea-4de9-8bbc-bc9ae203002c",
"metadata": {},
"source": [
"The [ChatOpenAI](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html#langchain_openai.chat_models.base.ChatOpenAI) class even comes with a `bind_tools` helper function that handles converting function-like objects to the OpenAI format and binding them for you:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "842c9914-ac28-428f-9fcc-556177e8e715",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_LCdBa4cbhMJPRdtkhDzpRh7x', 'function': {'arguments': '{\\n \"a\": 5,\\n \"b\": 3\\n}', 'name': 'multiply'}, 'type': 'function'}]})"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_with_tool = llm.bind_tools([multiply], tool_choice=\"multiply\")\n",
"llm_with_tool.invoke(\"what's 5 times three\")"
]
},
{
"cell_type": "markdown",
"id": "7d6e22d8-9f33-4845-9364-0d276df35ff5",
"metadata": {},
"source": [
"## Legacy args `functions` and `function_call`\n",
"\n",
"Until Fall of 2023 the OpenAI API expected arguments `functions` and `function_call` instead of `tools` and `tool_choice`, and these had a slightly different format. LangChain maintains utilities for using the old API if you need that as well:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "a317f71e-177e-404b-b09c-8fb365a4d8a2",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'name': 'multiply',\n",
" 'description': 'Multiply two integers together.',\n",
" 'parameters': {'type': 'object',\n",
" 'properties': {'a': {'description': 'First integer', 'type': 'integer'},\n",
" 'b': {'description': 'Second integer', 'type': 'integer'}},\n",
" 'required': ['a', 'b']}}"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.utils.function_calling import convert_to_openai_function\n",
"\n",
"convert_to_openai_function(multiply)"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "dd124259-75e2-4704-9f57-824d3e463bfa",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'function_call': {'arguments': '{\\n \"a\": 3,\\n \"b\": 1000000\\n}', 'name': 'multiply'}})"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_with_functions = llm.bind(\n",
" functions=[convert_to_openai_function(multiply)], function_call={\"name\": \"multiply\"}\n",
")\n",
"llm_with_functions.invoke(\"what's 3 times a million\")"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "d9a90af9-1c81-4ace-b155-1589f7308a1c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'function_call': {'arguments': '{\\n \"a\": 3,\\n \"b\": 1000000\\n}', 'name': 'multiply'}})"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_with_functions = llm.bind_functions([multiply], function_call=\"multiply\")\n",
"llm_with_functions.invoke(\"what's 3 times a million\")"
]
},
{
"cell_type": "markdown",
"id": "7779808d-d75c-4d76-890d-ba8c6c571514",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"* **Output parsing**: See [OpenAI Tools output parsers](/docs/modules/model_io/output_parsers/types/openai_tools) and [OpenAI Functions output parsers](/docs/modules/model_io/output_parsers/types/openai_functions) to learn about extracting the function calling API responses into various formats.\n",
"* **Tool use**: See how to construct chains and agents that actually call the invoked tools in [these guides](/docs/use_cases/tool_use/)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv",
"language": "python",
"name": "poetry-venv"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@@ -0,0 +1,535 @@
---
sidebar_position: 1
title: Function calling
---
# Function calling
A growing number of chat models, such as
[OpenAI](https://platform.openai.com/docs/guides/function-calling) and
[Gemini](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling),
have a function-calling API that lets you describe functions and
their arguments, and have the model return a JSON object with a function
to invoke and the inputs to that function. Function-calling is extremely
useful for building [tool-using chains and
agents](../../../../docs/use_cases/tool_use/), and for getting
structured outputs from models more generally.
LangChain comes with a number of utilities to make function-calling
easy. Namely, it comes with:
- simple syntax for binding functions to models
- converters for formatting various types of objects to the expected
function schemas
- output parsers for extracting the function invocations from API
responses
- chains for getting structured outputs from a model, built on top of
function calling
We'll focus here on the first two points. For a detailed guide on output
parsing check out the [OpenAI Tools output
parsers](../../../../docs/modules/model_io/output_parsers/types/openai_tools)
and to see the structured output chains check out the [Structured output
guide](../../../../docs/guides/structured_output).
Before getting started, make sure you have `langchain-core` installed.
```python
%pip install -qU langchain-core langchain-openai
```
```python
import getpass
import os
```
## Binding functions
A number of models implement helper methods that will take care of
formatting and binding different function-like objects to the model.
Let's take a look at how we might take the following Pydantic function
schema and get different models to invoke it:
```python
from langchain_core.pydantic_v1 import BaseModel, Field
# Note that the docstrings here are crucial, as they will be passed along
# to the model along with the class name.
class Multiply(BaseModel):
"""Multiply two integers together."""
a: int = Field(..., description="First integer")
b: int = Field(..., description="Second integer")
```
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
<Tabs>
<TabItem value="openai" label="OpenAI" default>
Set up dependencies and API keys:
```python
%pip install -qU langchain-openai
```
```python
os.environ["OPENAI_API_KEY"] = getpass.getpass()
```
We can use the `ChatOpenAI.bind_tools()` method to handle converting
`Multiply` to an OpenAI function and binding it to the model (i.e.,
passing it in each time the model is invoked).
```python
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
llm_with_tools = llm.bind_tools([Multiply])
llm_with_tools.invoke("what's 3 * 12")
```
``` text
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_Q8ZQ97Qrj5zalugSkYMGV1Uo', 'function': {'arguments': '{"a":3,"b":12}', 'name': 'Multiply'}, 'type': 'function'}]})
```
We can add a tool parser to extract the tool calls from the generated
message to JSON:
```python
from langchain_core.output_parsers.openai_tools import JsonOutputToolsParser
tool_chain = llm_with_tools | JsonOutputToolsParser()
tool_chain.invoke("what's 3 * 12")
```
``` text
[{'type': 'Multiply', 'args': {'a': 3, 'b': 12}}]
```
Or back to the original Pydantic class:
```python
from langchain_core.output_parsers.openai_tools import PydanticToolsParser
tool_chain = llm_with_tools | PydanticToolsParser(tools=[Multiply])
tool_chain.invoke("what's 3 * 12")
```
``` text
[Multiply(a=3, b=12)]
```
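The parsed tool calls are plain Python data, so executing them only takes a lookup from tool name to callable. Here is a minimal sketch (the `tool_registry` and `run_tool_calls` names are illustrative helpers of our own, not part of LangChain):

```python
# Minimal dispatch of parsed tool calls to plain Python functions.
# The input format matches the JsonOutputToolsParser output shown above.

def multiply(a: int, b: int) -> int:
    """Multiply two integers together."""
    return a * b

# Map tool names (as they appear in the parsed output) to callables.
tool_registry = {"Multiply": multiply}

def run_tool_calls(parsed_calls):
    """Execute each parsed tool call and collect the results."""
    return [tool_registry[call["type"]](**call["args"]) for call in parsed_calls]

print(run_tool_calls([{"type": "Multiply", "args": {"a": 3, "b": 12}}]))  # [36]
```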
If we want to force the model to call a tool (and to call it only
once), we can set the `tool_choice` argument:
```python
llm_with_multiply = llm.bind_tools([Multiply], tool_choice="Multiply")
llm_with_multiply.invoke(
"make up some numbers if you really want but I'm not forcing you"
)
```
``` text
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_f3DApOzb60iYjTfOhVFhDRMI', 'function': {'arguments': '{"a":5,"b":10}', 'name': 'Multiply'}, 'type': 'function'}]})
```
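For reference, the `tool_choice="Multiply"` shorthand expands to the explicit `tool_choice` payload from the OpenAI API (the same dict you could pass to `llm.bind` yourself):

```python
# The "Multiply" shorthand above stands for this explicit OpenAI payload:
tool_choice = {"type": "function", "function": {"name": "Multiply"}}

# Binding it directly is equivalent (sketch; invoking requires an OpenAI key):
# llm_with_multiply = llm.bind(
#     tools=[convert_to_openai_tool(Multiply)], tool_choice=tool_choice
# )
print(tool_choice["function"]["name"])  # Multiply
```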
For more see the [ChatOpenAI API
reference](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html#langchain_openai.chat_models.base.ChatOpenAI.bind_tools).
</TabItem>
<TabItem value="fireworks" label="Fireworks">
Install dependencies and set API keys:
```python
%pip install -qU langchain-fireworks
```
```python
os.environ["FIREWORKS_API_KEY"] = getpass.getpass()
```
We can use the `ChatFireworks.bind_tools()` method to handle converting
`Multiply` to a valid function schema and binding it to the model (i.e.,
passing it in each time the model is invoked).
```python
from langchain_fireworks import ChatFireworks
llm = ChatFireworks(model="accounts/fireworks/models/firefunction-v1", temperature=0)
llm_with_tools = llm.bind_tools([Multiply])
llm_with_tools.invoke("what's 3 * 12")
```
``` text
AIMessage(content='Three multiplied by twelve is 36.')
```
If our model isn't using the tool, as is the case here, we can force
tool usage by specifying `tool_choice="any"` or by specifying the name
of the specific tool we want used:
```python
llm_with_tools = llm.bind_tools([Multiply], tool_choice="Multiply")
llm_with_tools.invoke("what's 3 * 12")
```
``` text
AIMessage(content='', additional_kwargs={'tool_calls': [{'index': 0, 'id': 'call_qIP2bJugb67LGvc6Zhwkvfqc', 'type': 'function', 'function': {'name': 'Multiply', 'arguments': '{"a": 3, "b": 12}'}}]})
```
We can add a tool parser to extract the tool calls from the generated
message to JSON:
```python
from langchain_core.output_parsers.openai_tools import JsonOutputToolsParser
tool_chain = llm_with_tools | JsonOutputToolsParser()
tool_chain.invoke("what's 3 * 12")
```
``` text
[{'type': 'Multiply', 'args': {'a': 3, 'b': 12}}]
```
Or back to the original Pydantic class:
```python
from langchain_core.output_parsers.openai_tools import PydanticToolsParser
tool_chain = llm_with_tools | PydanticToolsParser(tools=[Multiply])
tool_chain.invoke("what's 3 * 12")
```
``` text
[Multiply(a=3, b=12)]
```
For more see the [ChatFireworks](https://api.python.langchain.com/en/latest/chat_models/langchain_fireworks.chat_models.ChatFireworks.html#langchain_fireworks.chat_models.ChatFireworks.bind_tools) reference.
</TabItem>
<TabItem value="mistral" label="Mistral">
Install dependencies and set API keys:
```python
%pip install -qU langchain-mistralai
```
```python
os.environ["MISTRAL_API_KEY"] = getpass.getpass()
```
We can use the `ChatMistralAI.bind_tools()` method to handle converting
`Multiply` to a valid function schema and binding it to the model (i.e.,
passing it in each time the model is invoked).
```python
from langchain_mistralai import ChatMistralAI
llm = ChatMistralAI(model="mistral-large-latest", temperature=0)
llm_with_tools = llm.bind_tools([Multiply])
llm_with_tools.invoke("what's 3 * 12")
```
``` text
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'null', 'type': <ToolType.function: 'function'>, 'function': {'name': 'Multiply', 'arguments': '{"a": 3, "b": 12}'}}]})
```
We can add a tool parser to extract the tool calls from the generated
message to JSON:
```python
from langchain_core.output_parsers.openai_tools import JsonOutputToolsParser
tool_chain = llm_with_tools | JsonOutputToolsParser()
tool_chain.invoke("what's 3 * 12")
```
``` text
[{'type': 'Multiply', 'args': {'a': 3, 'b': 12}}]
```
Or back to the original Pydantic class:
```python
from langchain_core.output_parsers.openai_tools import PydanticToolsParser
tool_chain = llm_with_tools | PydanticToolsParser(tools=[Multiply])
tool_chain.invoke("what's 3 * 12")
```
``` text
[Multiply(a=3, b=12)]
```
We can force tool usage by specifying `tool_choice="any"`:
```python
llm_with_tools = llm.bind_tools([Multiply], tool_choice="any")
llm_with_tools.invoke("I don't even want you to use the tool")
```
``` text
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'null', 'type': <ToolType.function: 'function'>, 'function': {'name': 'Multiply', 'arguments': '{"a": 5, "b": 7}'}}]})
```
For more see the [ChatMistralAI API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_mistralai.chat_models.ChatMistralAI.html#langchain_mistralai.chat_models.ChatMistralAI).
</TabItem>
<TabItem value="together" label="Together">
Since TogetherAI is a drop-in replacement for OpenAI, we can just use
the OpenAI integration.
Install dependencies and set API keys:
```python
%pip install -qU langchain-openai
```
```python
os.environ["TOGETHER_API_KEY"] = getpass.getpass()
```
We can use the `ChatOpenAI.bind_tools()` method to handle converting
`Multiply` to a valid function schema and binding it to the model (i.e.,
passing it in each time the model is invoked).
```python
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(
base_url="https://api.together.xyz/v1",
api_key=os.environ["TOGETHER_API_KEY"],
model="mistralai/Mixtral-8x7B-Instruct-v0.1",
)
llm_with_tools = llm.bind_tools([Multiply])
llm_with_tools.invoke("what's 3 * 12")
```
``` text
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_4tc61dp0478zafqe33hfriee', 'function': {'arguments': '{"a":3,"b":12}', 'name': 'Multiply'}, 'type': 'function'}]})
```
We can add a tool parser to extract the tool calls from the generated
message to JSON:
```python
from langchain_core.output_parsers.openai_tools import JsonOutputToolsParser
tool_chain = llm_with_tools | JsonOutputToolsParser()
tool_chain.invoke("what's 3 * 12")
```
``` text
[{'type': 'Multiply', 'args': {'a': 3, 'b': 12}}]
```
Or back to the original Pydantic class:
```python
from langchain_core.output_parsers.openai_tools import PydanticToolsParser
tool_chain = llm_with_tools | PydanticToolsParser(tools=[Multiply])
tool_chain.invoke("what's 3 * 12")
```
``` text
[Multiply(a=3, b=12)]
```
If we want to force the model to call a tool (and to call it only
once), we can set the `tool_choice` argument:
```python
llm_with_multiply = llm.bind_tools([Multiply], tool_choice="Multiply")
llm_with_multiply.invoke(
"make up some numbers if you really want but I'm not forcing you"
)
```
``` text
AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_6k6d0gr3jhqil2kqf7sgeusl', 'function': {'arguments': '{"a":5,"b":7}', 'name': 'Multiply'}, 'type': 'function'}]})
```
For more see the [ChatOpenAI API
reference](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html#langchain_openai.chat_models.base.ChatOpenAI.bind_tools).
</TabItem>
</Tabs>
## Defining function schemas
In case you need to access function schemas directly, LangChain has a built-in converter that can turn
Python functions, Pydantic classes, and LangChain Tools into OpenAI-format JSON schemas:
### Python function
```python
import json
from langchain_core.utils.function_calling import convert_to_openai_tool
def multiply(a: int, b: int) -> int:
"""Multiply two integers together.
Args:
a: First integer
b: Second integer
"""
return a * b
print(json.dumps(convert_to_openai_tool(multiply), indent=2))
```
``` text
{
"type": "function",
"function": {
"name": "multiply",
"description": "Multiply two integers together.",
"parameters": {
"type": "object",
"properties": {
"a": {
"type": "integer",
"description": "First integer"
},
"b": {
"type": "integer",
"description": "Second integer"
}
},
"required": [
"a",
"b"
]
}
}
}
```
### Pydantic class
```python
from langchain_core.pydantic_v1 import BaseModel, Field
class multiply(BaseModel):
"""Multiply two integers together."""
a: int = Field(..., description="First integer")
b: int = Field(..., description="Second integer")
print(json.dumps(convert_to_openai_tool(multiply), indent=2))
```
``` text
{
"type": "function",
"function": {
"name": "multiply",
"description": "Multiply two integers together.",
"parameters": {
"type": "object",
"properties": {
"a": {
"description": "First integer",
"type": "integer"
},
"b": {
"description": "Second integer",
"type": "integer"
}
},
"required": [
"a",
"b"
]
}
}
}
```
### LangChain Tool
```python
from typing import Any, Type
from langchain_core.tools import BaseTool
class MultiplySchema(BaseModel):
"""Multiply tool schema."""
a: int = Field(..., description="First integer")
b: int = Field(..., description="Second integer")
class Multiply(BaseTool):
args_schema: Type[BaseModel] = MultiplySchema
name: str = "multiply"
description: str = "Multiply two integers together."
def _run(self, a: int, b: int, **kwargs: Any) -> Any:
return a * b
# Note: we're passing in a Multiply object not the class itself.
print(json.dumps(convert_to_openai_tool(Multiply()), indent=2))
```
``` text
{
"type": "function",
"function": {
"name": "multiply",
"description": "Multiply two integers together.",
"parameters": {
"type": "object",
"properties": {
"a": {
"description": "First integer",
"type": "integer"
},
"b": {
"description": "Second integer",
"type": "integer"
}
},
"required": [
"a",
"b"
]
}
}
}
```
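All three converters emit the same OpenAI tool shape. As a quick sanity check, here is a loose structural validator of our own (an illustrative helper, not a LangChain or OpenAI API) applied to a schema like the ones printed above:

```python
# Loose structural check for the OpenAI tool format printed above.
def is_openai_tool(obj) -> bool:
    """Return True if obj looks like an OpenAI-format tool schema."""
    fn = obj.get("function", {})
    return (
        obj.get("type") == "function"
        and "name" in fn
        and fn.get("parameters", {}).get("type") == "object"
    )

# Example schema of the shape produced by convert_to_openai_tool.
schema = {
    "type": "function",
    "function": {
        "name": "multiply",
        "description": "Multiply two integers together.",
        "parameters": {
            "type": "object",
            "properties": {
                "a": {"type": "integer", "description": "First integer"},
                "b": {"type": "integer", "description": "Second integer"},
            },
            "required": ["a", "b"],
        },
    },
}

print(is_openai_tool(schema))  # True
```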
## Next steps
- **Output parsing**: See [OpenAI Tools output
parsers](../../../../docs/modules/model_io/output_parsers/types/openai_tools)
and [OpenAI Functions output
parsers](../../../../docs/modules/model_io/output_parsers/types/openai_functions)
to learn about extracting the function calling API responses into
various formats.
- **Structured output chains**: [Some models have constructors](../../../../docs/guides/structured_output) that
handle creating a structured output chain for you.
- **Tool use**: See how to construct chains and agents that actually
call the invoked tools in [these
guides](../../../../docs/use_cases/tool_use/).