docs: add vertexai to structured output (#20171)

pull/20179/head
Bagatur, committed by GitHub 6 months ago
parent a812839f0c
commit 1af7133828

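The hunks below all call `model.with_structured_output(Joke)` without showing the `Joke` schema, which is defined earlier in the notebook, outside these hunks. A minimal sketch of what it presumably looks like — a Pydantic model with `setup` and `punchline` fields, matching the `Joke(setup=..., punchline=...)` outputs shown in the diff (field descriptions here are illustrative assumptions):

```python
from pydantic import BaseModel, Field


# Sketch of the `Joke` schema the diff assumes; the real definition
# lives earlier in the notebook, outside these hunks.
class Joke(BaseModel):
    """Joke to tell the user."""

    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline of the joke")


# with_structured_output(Joke) makes the chat model return a parsed
# Joke instance instead of a raw message:
joke = Joke(
    setup="Why was the cat sitting on the computer?",
    punchline="To keep an eye on the mouse!",
)
```

The schema's docstring and field descriptions are passed to the model as part of the function/tool definition, so descriptive names matter.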
@@ -66,7 +66,9 @@
 "source": [
 "## OpenAI\n",
 "\n",
-"OpenAI exposes a few different ways to get structured outputs."
+"OpenAI exposes a few different ways to get structured outputs. \n",
+"\n",
+"[API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html#langchain_openai.chat_models.base.ChatOpenAI.with_structured_output)"
 ]
 },
 {
@@ -96,8 +98,8 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"model = ChatOpenAI()\n",
-"model_with_structure = model.with_structured_output(Joke)"
+"model = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)\n",
+"structured_llm = model.with_structured_output(Joke)"
 ]
 },
 {
@@ -118,7 +120,7 @@
 }
 ],
 "source": [
-"model_with_structure.invoke(\"Tell me a joke about cats\")"
+"structured_llm.invoke(\"Tell me a joke about cats\")"
 ]
 },
 {
@@ -138,7 +140,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"model_with_structure = model.with_structured_output(Joke, method=\"json_mode\")"
+"structured_llm = model.with_structured_output(Joke, method=\"json_mode\")"
 ]
 },
 {
@@ -159,7 +161,7 @@
 }
 ],
 "source": [
-"model_with_structure.invoke(\n",
+"structured_llm.invoke(\n",
 " \"Tell me a joke about cats, respond in JSON with `setup` and `punchline` keys\"\n",
 ")"
 ]
@@ -171,7 +173,9 @@
 "source": [
 "## Fireworks\n",
 "\n",
-"[Fireworks](https://fireworks.ai/) similarly supports function calling and JSON mode for select models."
+"[Fireworks](https://fireworks.ai/) similarly supports function calling and JSON mode for select models.\n",
+"\n",
+"[API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_fireworks.chat_models.ChatFireworks.html#langchain_fireworks.chat_models.ChatFireworks.with_structured_output)"
 ]
 },
 {
@@ -202,7 +206,7 @@
 "outputs": [],
 "source": [
 "model = ChatFireworks(model=\"accounts/fireworks/models/firefunction-v1\")\n",
-"model_with_structure = model.with_structured_output(Joke)"
+"structured_llm = model.with_structured_output(Joke)"
 ]
 },
 {
@@ -223,7 +227,7 @@
 }
 ],
 "source": [
-"model_with_structure.invoke(\"Tell me a joke about cats\")"
+"structured_llm.invoke(\"Tell me a joke about cats\")"
 ]
 },
 {
@@ -243,7 +247,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"model_with_structure = model.with_structured_output(Joke, method=\"json_mode\")"
+"structured_llm = model.with_structured_output(Joke, method=\"json_mode\")"
 ]
 },
 {
@@ -264,7 +268,7 @@
 }
 ],
 "source": [
-"model_with_structure.invoke(\n",
+"structured_llm.invoke(\n",
 " \"Tell me a joke about dogs, respond in JSON with `setup` and `punchline` keys\"\n",
 ")"
 ]
@@ -276,7 +280,9 @@
 "source": [
 "## Mistral\n",
 "\n",
-"We also support structured output with Mistral models, although we only support function calling."
+"We also support structured output with Mistral models, although we only support function calling.\n",
+"\n",
+"[API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_mistralai.chat_models.ChatMistralAI.html#langchain_mistralai.chat_models.ChatMistralAI.with_structured_output)"
 ]
 },
 {
@@ -297,7 +303,7 @@
 "outputs": [],
 "source": [
 "model = ChatMistralAI(model=\"mistral-large-latest\")\n",
-"model_with_structure = model.with_structured_output(Joke)"
+"structured_llm = model.with_structured_output(Joke)"
 ]
 },
 {
@@ -307,7 +313,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"model_with_structure.invoke(\"Tell me a joke about cats\")"
+"structured_llm.invoke(\"Tell me a joke about cats\")"
 ]
 },
 {
@@ -344,7 +350,7 @@
 " api_key=os.environ[\"TOGETHER_API_KEY\"],\n",
 " model=\"mistralai/Mixtral-8x7B-Instruct-v0.1\",\n",
 ")\n",
-"model_with_structure = model.with_structured_output(Joke)"
+"structured_llm = model.with_structured_output(Joke)"
 ]
 },
 {
@@ -365,7 +371,7 @@
 }
 ],
 "source": [
-"model_with_structure.invoke(\"Tell me a joke about cats\")"
+"structured_llm.invoke(\"Tell me a joke about cats\")"
 ]
 },
 {
@@ -375,7 +381,9 @@
 "source": [
 "## Groq\n",
 "\n",
-"Groq provides an OpenAI-compatible function calling API"
+"Groq provides an OpenAI-compatible function calling API.\n",
+"\n",
+"[API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_groq.chat_models.ChatGroq.html#langchain_groq.chat_models.ChatGroq.with_structured_output)"
 ]
 },
 {
@@ -415,7 +423,7 @@
 ],
 "source": [
 "model = ChatGroq()\n",
-"model_with_structure = model.with_structured_output(Joke)"
+"structured_llm = model.with_structured_output(Joke)"
 ]
 },
 {
@@ -436,7 +444,7 @@
 }
 ],
 "source": [
-"model_with_structure.invoke(\"Tell me a joke about cats\")"
+"structured_llm.invoke(\"Tell me a joke about cats\")"
 ]
 },
 {
@@ -456,7 +464,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"model_with_structure = model.with_structured_output(Joke, method=\"json_mode\")"
+"structured_llm = model.with_structured_output(Joke, method=\"json_mode\")"
 ]
 },
 {
@@ -477,7 +485,7 @@
 }
 ],
 "source": [
-"model_with_structure.invoke(\n",
+"structured_llm.invoke(\n",
 " \"Tell me a joke about cats, respond in JSON with `setup` and `punchline` keys\"\n",
 ")"
 ]
@@ -489,7 +497,9 @@
 "source": [
 "## Anthropic\n",
 "\n",
-"Anthropic's tool-calling API can be used for structuring outputs. Note that there is currently no way to force a tool-call via the API, so prompting the model correctly is still important."
+"Anthropic's tool-calling API can be used for structuring outputs. Note that there is currently no way to force a tool-call via the API, so prompting the model correctly is still important.\n",
+"\n",
+"[API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_anthropic.chat_models.ChatAnthropic.html#langchain_anthropic.chat_models.ChatAnthropic.with_structured_output)"
 ]
 },
 {
@@ -512,19 +522,54 @@
 "source": [
 "from langchain_anthropic import ChatAnthropic\n",
 "\n",
-"model = ChatAnthropic(\n",
-" model=\"claude-3-opus-20240229\",\n",
-")\n",
-"model_with_structure = model.with_structured_output(Joke)\n",
-"model_with_structure.invoke(\"Tell me a joke about cats\")"
+"model = ChatAnthropic(model=\"claude-3-opus-20240229\", temperature=0)\n",
+"structured_llm = model.with_structured_output(Joke)\n",
+"structured_llm.invoke(\"Tell me a joke about cats. Make sure to call the Joke function.\")"
 ]
 },
+{
+"cell_type": "markdown",
+"id": "6c797e2d-3115-4ca2-9c2f-e853bdc7956d",
+"metadata": {},
+"source": [
+"## Vertex AI\n",
+"\n",
+"Google's Gemini models support [function-calling](https://ai.google.dev/docs/function_calling), which we can access via Vertex AI and use for structuring outputs.\n",
+"\n",
+"[API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_google_vertexai.chat_models.ChatVertexAI.html#langchain_google_vertexai.chat_models.ChatVertexAI.with_structured_output)"
+]
+},
+{
+"cell_type": "code",
+"execution_count": 7,
+"id": "24421189-02bf-4589-a91a-197584c4a696",
+"metadata": {},
+"outputs": [
+{
+"data": {
+"text/plain": [
+"Joke(setup='A cat-ch', punchline='What do you call a cat that loves to play fetch?')"
+]
+},
+"execution_count": 7,
+"metadata": {},
+"output_type": "execute_result"
+}
+],
+"source": [
+"from langchain_google_vertexai import ChatVertexAI\n",
+"\n",
+"llm = ChatVertexAI(model=\"gemini-pro\", temperature=0)\n",
+"structured_llm = llm.with_structured_output(Joke)\n",
+"structured_llm.invoke(\"Tell me a joke about cats\")"
+]
+}
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "Python 3 (ipykernel)",
+"display_name": "poetry-venv-2",
 "language": "python",
-"name": "python3"
+"name": "poetry-venv-2"
 },
 "language_info": {
 "codemirror_mode": {
