{
"cells": [
{
"cell_type": "markdown",
"id": "f970f757-ec76-4bf0-90cd-a2fb68b945e3",
"metadata": {},
"source": [
"# Exploring OpenAI V1 functionality\n",
"\n",
"On 11.06.23 OpenAI released a number of new features, and along with it bumped their Python SDK to 1.0.0. This notebook shows off the new features and how to use them with LangChain."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ee897729-263a-4073-898f-bb4cf01ed829",
"metadata": {},
"outputs": [],
"source": [
"!pip install \"openai>=1\""
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "c3e067ce-7a43-47a7-bc89-41f1de4cf136",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.schema.messages import HumanMessage, SystemMessage"
]
},
{
"cell_type": "markdown",
"id": "fa7e7e95-90a1-4f73-98fe-10c4b4e0951b",
"metadata": {},
"source": [
"## [Vision](https://platform.openai.com/docs/guides/vision)\n",
"\n",
"OpenAI released multi-modal models, which can take a sequence of text and images as input."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "1c8c3965-d3c9-4186-b5f3-5e67855ef916",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='The image appears to be a diagram representing the architecture or components of a software platform named \"LangChain.\" This diagram outlines various layers and elements of the platform, which seems to be related to language processing or computational linguistics, as suggested by the context clues in the names of the components.\\n\\nHere\\'s a breakdown of the components shown:\\n\\n- **LangSmith**: This seems to be a tool or suite related to testing, evaluation, monitoring, feedback, and annotation within the platform.\\n\\n- **LangServe**: This could represent a service layer that exposes the platform\\'s capabilities as REST API endpoints.\\n\\n- **Templates**: These are likely reference applications provided as starting points or examples for users of the platform.\\n\\n- **Chains, agents, agent executors**: This section describes the common application logic, perhaps indicating that the platform uses a chain of agents or processes to execute tasks.\\n\\n- **Model I/O**: This includes the components related to input/output processing for a model, like prompt, example selector, model, and output parser.\\n\\n- **Retrieval**: These components are involved in retrieving documents, splitting text, and managing embeddings and vector stores, which are important for tasks like search and information retrieval.\\n\\n- **Agent tooling**: This might refer to the tools used for creating,')"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chat = ChatOpenAI(model=\"gpt-4-vision-preview\", max_tokens=256)\n",
"chat.invoke(\n",
" [\n",
" HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"What is this image showing\"},\n",
" {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\n",
" \"url\": \"https://python.langchain.com/assets/images/langchain_stack-da369071b058555da3d491a695651f15.jpg\",\n",
" \"detail\": \"auto\",\n",
" },\n",
" },\n",
" ]\n",
" )\n",
" ]\n",
")"
]
},
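{
"cell_type": "markdown",
"id": "0a9c1f2e-7d34-4b8a-9c5d-2e6f1a3b4c5d",
"metadata": {},
"source": [
"Images can also be passed as base64-encoded data URLs, which is handy for local files. The cell below is a minimal sketch: it assumes a hypothetical local path `path/to/image.jpg` (swap in your own file) and reuses the same message format as above."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1b7d2c3e-8f45-4a6b-9d0e-3f7a2b4c5d6e",
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"\n",
"# Hypothetical local image path -- replace with your own file.\n",
"with open(\"path/to/image.jpg\", \"rb\") as f:\n",
"    image_b64 = base64.b64encode(f.read()).decode(\"utf-8\")\n",
"\n",
"chat.invoke(\n",
"    [\n",
"        HumanMessage(\n",
"            content=[\n",
"                {\"type\": \"text\", \"text\": \"What is this image showing\"},\n",
"                {\n",
"                    \"type\": \"image_url\",\n",
"                    \"image_url\": {\"url\": f\"data:image/jpeg;base64,{image_b64}\"},\n",
"                },\n",
"            ]\n",
"        )\n",
"    ]\n",
")"
]
},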
{
"cell_type": "markdown",
"id": "71c34763-d1e7-4b9a-a9d7-3e4cc0dfc2c4",
"metadata": {},
"source": [
"## [JSON mode](https://platform.openai.com/docs/guides/text-generation/json-mode)\n",
"\n",
"Constrain the model to only generate valid JSON. Note that you must include a system message with instructions to use JSON for this mode to work.\n",
"\n",
"Only works with certain models. "
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "db6072c4-f3f3-415d-872b-71ea9f3c02bb",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\n",
" \"companies\": [\n",
" {\n",
" \"name\": \"Google\",\n",
" \"origin\": \"USA\"\n",
" },\n",
" {\n",
" \"name\": \"Deepmind\",\n",
" \"origin\": \"UK\"\n",
" }\n",
" ]\n",
"}\n"
]
}
],
"source": [
"chat = ChatOpenAI(model=\"gpt-3.5-turbo-1106\").bind(\n",
" response_format={\"type\": \"json_object\"}\n",
")\n",
"\n",
"output = chat.invoke(\n",
" [\n",
" SystemMessage(\n",
" content=\"Extract the 'name' and 'origin' of any companies mentioned in the following statement. Return a JSON list.\"\n",
" ),\n",
" HumanMessage(\n",
" content=\"Google was founded in the USA, while Deepmind was founded in the UK\"\n",
" ),\n",
" ]\n",
")\n",
"print(output.content)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "08e00ccf-b991-4249-846b-9500a0ccbfa0",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'companies': [{'name': 'Google', 'origin': 'USA'},\n",
" {'name': 'Deepmind', 'origin': 'UK'}]}"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import json\n",
"\n",
"json.loads(output.content)"
]
},
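{
"cell_type": "markdown",
"id": "2c8e3d4f-9a56-4b7c-8d1e-4f8b3c5d6e7f",
"metadata": {},
"source": [
"If you want the parsed dict straight out of a chain, one option (a sketch, not the only way) is to pipe the JSON-mode model into a plain Python function; LCEL coerces the callable into a runnable:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3d9f4e50-0b67-4c8d-9e2f-5a9c4d6e7f80",
"metadata": {},
"outputs": [],
"source": [
"# `chat` is still bound to JSON mode from above; the lambda parses the message content.\n",
"json_chain = chat | (lambda message: json.loads(message.content))\n",
"\n",
"json_chain.invoke(\n",
"    [\n",
"        SystemMessage(\n",
"            content=\"Extract the 'name' and 'origin' of any companies mentioned in the following statement. Return a JSON list.\"\n",
"        ),\n",
"        HumanMessage(\n",
"            content=\"Google was founded in the USA, while Deepmind was founded in the UK\"\n",
"        ),\n",
"    ]\n",
")"
]
},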
{
"cell_type": "markdown",
"id": "aa9a94d9-4319-4ab7-a979-c475ce6b5f50",
"metadata": {},
"source": [
"## [System fingerprint](https://platform.openai.com/docs/guides/text-generation/reproducible-outputs)\n",
"\n",
"OpenAI sometimes changes model configurations in a way that impacts outputs. Whenever this happens, the system_fingerprint associated with a generation will change."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "1281883c-bf8f-4665-89cd-4f33ccde69ab",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'token_usage': {'completion_tokens': 43, 'prompt_tokens': 49, 'total_tokens': 92}, 'model_name': 'gpt-3.5-turbo-1106', 'system_fingerprint': 'fp_eeff13170a'}\n"
]
}
],
"source": [
"chat = ChatOpenAI(model=\"gpt-3.5-turbo-1106\")\n",
"output = chat.generate(\n",
" [\n",
" [\n",
" SystemMessage(\n",
" content=\"Extract the 'name' and 'origin' of any companies mentioned in the following statement. Return a JSON list.\"\n",
" ),\n",
" HumanMessage(\n",
" content=\"Google was founded in the USA, while Deepmind was founded in the UK\"\n",
" ),\n",
" ]\n",
" ]\n",
")\n",
"print(output.llm_output)"
]
},
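{
"cell_type": "markdown",
"id": "4ea05f61-1c78-4d9e-8f30-6b0d5e7f8a91",
"metadata": {},
"source": [
"The same reproducible-outputs guide also describes a `seed` parameter. As a rough sketch (determinism is best-effort and depends on the `system_fingerprint` staying the same), you can pass a seed through `model_kwargs` so repeated calls are more likely to return identical outputs:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5fb16072-2d89-4eaf-903a-7c1e6f8a9b02",
"metadata": {},
"outputs": [],
"source": [
"# Pass the OpenAI `seed` parameter via model_kwargs (best-effort reproducibility).\n",
"chat = ChatOpenAI(\n",
"    model=\"gpt-3.5-turbo-1106\", temperature=0, model_kwargs={\"seed\": 123}\n",
")\n",
"\n",
"first = chat.invoke([HumanMessage(content=\"Tell me a one-sentence joke about computers\")])\n",
"second = chat.invoke([HumanMessage(content=\"Tell me a one-sentence joke about computers\")])\n",
"print(first.content == second.content)"
]
},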
{
"cell_type": "code",
"execution_count": null,
"id": "5c637ba1-322d-4fc9-b97e-3afa83dc4d72",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv",
"language": "python",
"name": "poetry-venv"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}