function calling

pull/348/head
Elvis Saravia 5 months ago
parent a7b690da38
commit 7f7f9eaaa8

@ -0,0 +1,254 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import os\n",
"import openai\n",
"import json\n",
"from dotenv import load_dotenv\n",
"\n",
"load_dotenv()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Set the OpenAI API Key"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"openai.api_key = os.environ['OPENAI_API_KEY']"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Define a Get Completion Function"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [],
"source": [
"def get_completion(messages, model=\"gpt-3.5-turbo-1106\", temperature=0, max_tokens=300, tools=None):\n",
" response = openai.chat.completions.create(\n",
" model=model,\n",
" messages=messages,\n",
" temperature=temperature,\n",
" max_tokens=max_tokens,\n",
" tools=tools\n",
" )\n",
" return response.choices[0].message"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Define Dummy Function"
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {},
"outputs": [],
"source": [
"# Defines a dummy function to get the current weather\n",
"def get_current_weather(location, unit=\"fahrenheit\"):\n",
" \"\"\"Get the current weather in a given location\"\"\"\n",
" weather = {\n",
" \"location\": location,\n",
" \"temperature\": \"50\",\n",
" \"unit\": unit,\n",
" }\n",
"\n",
" return json.dumps(weather)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Define Functions\n",
"\n",
"As demonstrated in the OpenAI documentation, here is a simple example of how to define the functions that will be part of the request.\n",
"\n",
"The descriptions are important because they are passed directly to the LLM, which uses them to decide whether to call the functions and how to call them."
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [],
"source": [
"# define the function as a tool\n",
"tools = [\n",
" {\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"get_current_weather\",\n",
" \"description\": \"Get the current weather in a given location\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"location\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The city and state, e.g. San Francisco, CA\",\n",
" },\n",
" \"unit\": {\n",
" \"type\": \"string\", \n",
" \"enum\": [\"celsius\", \"fahrenheit\"]},\n",
" },\n",
" \"required\": [\"location\"],\n",
" },\n",
" }, \n",
" }\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"# define a list of messages\n",
"\n",
"messages = [\n",
" {\n",
" \"role\": \"user\",\n",
" \"content\": \"What is the weather like in London?\"\n",
" }\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_zDFJY7MgSkWAvu8h2cFTvdVm', function=Function(arguments='{\"location\":\"London\",\"unit\":\"celsius\"}', name='get_current_weather'), type='function')])\n"
]
}
],
"source": [
"response = get_completion(messages, tools=tools)\n",
"print(response)"
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'{\"location\":\"London\",\"unit\":\"celsius\"}'"
]
},
"execution_count": 27,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response.tool_calls[0].function.arguments"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can now capture the arguments:"
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {},
"outputs": [],
"source": [
"args = json.loads(response.tool_calls[0].function.arguments)"
]
},
{
"cell_type": "code",
"execution_count": 35,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'{\"location\": \"London\", \"temperature\": \"50\", \"unit\": \"celsius\"}'"
]
},
"execution_count": 35,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"get_current_weather(**args)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "papersql",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

@ -2,8 +2,4 @@
import { Callout } from 'nextra-theme-docs'
In this section, we will cover some advanced and interesting ways we can use prompt engineering to perform useful and more advanced tasks.
<Callout emoji="⚠️">
This section is under heavy development.
</Callout>
In this section, we will cover advanced and interesting ways we can use prompt engineering to perform useful and more advanced tasks with large language models (LLMs).

@ -1,4 +1,5 @@
{
"function_calling": "Function Calling",
"generating": "Generating Data",
"synthetic_rag": "Generating Synthetic Dataset for RAG",
"generating_textbooks": "Tackling Generated Datasets Diversity",

@ -0,0 +1,115 @@
# Function Calling
## Getting Started with Function Calling
Function calling is the ability to reliably connect LLMs to external tools to enable effective tool usage and interaction with external APIs.
LLMs like GPT-4 and GPT-3.5 have been fine-tuned to detect when a function needs to be called and to output JSON containing the arguments for calling it. The functions invoked this way act as tools in your AI application, and you can define more than one in a single request.
This capability is important for building LLM-powered chatbots or agents that need to retrieve context for an LLM or interact with external tools by converting natural language into API calls.
Function calling enables developers to create:
- conversational agents that can efficiently use external tools to answer questions. For example, the query "What is the weather like in Belize?" will be converted to a function call such as `get_current_weather(location: string, unit: 'celsius' | 'fahrenheit')`
- LLM-powered solutions for extracting and tagging data (e.g., extracting people's names from a Wikipedia article); see the sketch after this list
- applications that can help convert natural language to API calls or valid database queries
- conversational knowledge retrieval engines that interact with a knowledge base
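To make the extraction use case above concrete, here is a hypothetical tool definition; the `extract_people` name and its schema are invented for this sketch and are not from any official API:
```python
# hypothetical tool definition for the data-extraction use case;
# the function name and schema are illustrative only
extraction_tools = [
    {
        "type": "function",
        "function": {
            "name": "extract_people",
            "description": "Extract the names of all people mentioned in the provided text",
            "parameters": {
                "type": "object",
                "properties": {
                    "names": {
                        "type": "array",
                        "items": {"type": "string"},
                        "description": "The extracted names, e.g. ['Ada Lovelace']",
                    }
                },
                "required": ["names"],
            },
        },
    }
]
```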
In this guide, we demonstrate how to prompt models like GPT-4 and open-source models to perform function calling for different use cases.
## Function Calling with GPT-4
As a basic example, let's say we ask the model to check the weather in a given location.
The LLM alone cannot answer this request because it was trained on data with a cutoff date. The way to solve this is to combine the LLM with an external tool. You can leverage the model's function calling capabilities to determine an external function to call, along with its arguments, and then have it return a final response. Below is a simple example of how you can achieve this using the OpenAI APIs.
Let's say a user is asking the following question to the model:
```
What is the weather like in London?
```
To handle this request using function calling, the first step is to define a weather function, or a set of functions, that you will pass as part of the OpenAI API request:
```python
tools = [
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA",
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"]},
},
"required": ["location"],
},
},
}
]
```
The `get_current_weather` function returns the current weather in a given location. When you pass this function definition as part of the request, it doesn't actually execute the function; the model just returns a JSON object containing the arguments needed to call it. Here are some code snippets showing how to achieve this.
You can define a completion function as follows:
```python
import os
import openai

# assumes your API key is set in the environment
openai.api_key = os.environ["OPENAI_API_KEY"]

def get_completion(messages, model="gpt-3.5-turbo-1106", temperature=0, max_tokens=300, tools=None):
    response = openai.chat.completions.create(
        model=model,
        messages=messages,
        temperature=temperature,
        max_tokens=max_tokens,
        tools=tools
    )
    return response.choices[0].message
```
This is how you can compose the user question:
```python
messages = [
{
"role": "user",
"content": "What is the weather like in London?"
}
]
```
Finally, you can call the `get_completion` function defined above, passing both the `messages` and the `tools`:
```python
response = get_completion(messages, tools=tools)
```
The `response` object contains the following:
```python
ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='...', function=Function(arguments='{"location":"London","unit":"celsius"}', name='get_current_weather'), type='function')])
```
In particular, the `arguments` object contains the arguments extracted by the model, which are needed to complete the request.
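For example, you can parse those arguments with `json.loads` and pass them to a local implementation of the function. A minimal sketch, using the dummy `get_current_weather` from the notebook above in place of a real weather API:
```python
import json

# dummy implementation for illustration; a real application would
# call an actual weather API here
def get_current_weather(location, unit="fahrenheit"):
    """Get the current weather in a given location"""
    weather = {
        "location": location,
        "temperature": "50",
        "unit": unit,
    }
    return json.dumps(weather)

args = json.loads(response.tool_calls[0].function.arguments)
weather_info = get_current_weather(**args)
# '{"location": "London", "temperature": "50", "unit": "celsius"}'
```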
You can then choose to call an external weather API for the actual weather. Once the weather information is available, you can pass it back to the model so it can summarize a final response to the original user question.
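A minimal sketch of that final step, assuming `weather_info` holds the result of the call above; the message shapes follow OpenAI's tool-calling format:
```python
# append the assistant's tool call and the tool's result to the conversation
messages.append(response)  # the assistant message containing tool_calls
messages.append({
    "role": "tool",
    "tool_call_id": response.tool_calls[0].id,
    "content": weather_info,
})

# ask the model for a final answer grounded in the tool output
final_response = get_completion(messages, tools=tools)
print(final_response.content)
# e.g. "The current weather in London is 50 degrees Celsius." (illustrative output)
```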
## Function Calling with Open-Source LLMs
More notes coming soon...
## Function Calling Use Cases
More notes coming soon...
## References
- [Fireworks Raises the Quality Bar with Function Calling Model and API Release](https://blog.fireworks.ai/fireworks-raises-the-quality-bar-with-function-calling-model-and-api-release-e7f49d1e98e9)
- [Benchmarking Agent Tool Use and Function Calling](https://blog.langchain.dev/benchmarking-agent-tool-use/)
- [Function Calling](https://ai.google.dev/docs/function_calling)
- [Interacting with APIs](https://python.langchain.com/docs/use_cases/apis)
- [OpenAI's Function Calling](https://platform.openai.com/docs/guides/function-calling)
- [How to call functions with chat models](https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models)
- [Pushing ChatGPT's Structured Data Support To Its Limits](https://minimaxir.com/2023/12/chatgpt-structured-data/)

@ -23,6 +23,8 @@ The following are the latest papers (sorted by release date) on prompt engineeri
## Approaches
- [Principled Instructions Are All You Need for Questioning LLaMA-1/2, GPT-3.5/4](https://arxiv.org/abs/2312.16171v1) (December 2023)
- [Large Language Models as Analogical Reasoners](https://arxiv.org/abs/2310.01714) (October 2023)
- [Query-Dependent Prompt Evaluation and Optimization with Offline Inverse RL](https://arxiv.org/abs/2309.06653) (September 2023)
- [Chain-of-Verification Reduces Hallucination in Large Language Models](https://arxiv.org/abs/2309.11495) (September 2023)
