{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "cbe47c3a",
   "metadata": {},
   "source": [
    "# Serialization\n",
    "This notebook covers how to serialize chains to and from disk. The serialization formats we use are JSON and YAML. Currently, only some chains support this type of serialization; we will grow the number of supported chains over time.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e4a8a447",
   "metadata": {},
   "source": [
    "## Saving a chain to disk\n",
    "First, let's go over how to save a chain to disk. This can be done with the `.save` method, specifying a file path with a `.json` or `.yaml` extension."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "26e28451",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain import PromptTemplate, OpenAI, LLMChain\n",
    "\n",
    "template = \"\"\"Question: {question}\n",
    "\n",
    "Answer: Let's think step by step.\"\"\"\n",
    "prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
    "llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0), verbose=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "bfa18e1f",
   "metadata": {},
   "outputs": [],
   "source": [
    "llm_chain.save(\"llm_chain.json\")"
   ]
  },
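  {
   "cell_type": "markdown",
   "id": "a1b2c3d4",
   "metadata": {},
   "source": [
    "As noted in the intro, YAML is supported alongside JSON. A minimal sketch, assuming `.save` infers the output format from the file extension (the `llm_chain.yaml` path is an illustrative choice, not from the original example):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b5c6d7e8",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: save the same chain as YAML instead of JSON.\n",
    "# Assumes `.save` picks the format from the `.yaml` extension,\n",
    "# as the intro suggests; the filename is illustrative.\n",
    "llm_chain.save(\"llm_chain.yaml\")"
   ]
  },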
  {
   "cell_type": "markdown",
   "id": "ea82665d",
   "metadata": {},
   "source": [
    "Let's now take a look at what's inside the saved JSON file."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "0fd33328",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\r\n",
      "    \"memory\": null,\r\n",
      "    \"verbose\": true,\r\n",
      "    \"prompt\": {\r\n",
      "        \"input_variables\": [\r\n",
      "            \"question\"\r\n",
      "        ],\r\n",
      "        \"output_parser\": null,\r\n",
      "        \"template\": \"Question: {question}\\n\\nAnswer: Let's think step by step.\",\r\n",
      "        \"template_format\": \"f-string\"\r\n",
      "    },\r\n",
      "    \"llm\": {\r\n",
      "        \"model_name\": \"text-davinci-003\",\r\n",
      "        \"temperature\": 0.0,\r\n",
      "        \"max_tokens\": 256,\r\n",
      "        \"top_p\": 1,\r\n",
      "        \"frequency_penalty\": 0,\r\n",
      "        \"presence_penalty\": 0,\r\n",
      "        \"n\": 1,\r\n",
      "        \"best_of\": 1,\r\n",
      "        \"request_timeout\": null,\r\n",
      "        \"logit_bias\": {},\r\n",
      "        \"_type\": \"openai\"\r\n",
      "    },\r\n",
      "    \"output_key\": \"text\",\r\n",
      "    \"_type\": \"llm_chain\"\r\n",
      "}"
     ]
    }
   ],
   "source": [
    "!cat llm_chain.json"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2012c724",
   "metadata": {},
   "source": [
    "## Loading a chain from disk\n",
    "We can load a chain from disk by using the `load_chain` method."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "342a1974",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.chains import load_chain"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "394b7da8",
   "metadata": {},
   "outputs": [],
   "source": [
    "chain = load_chain(\"llm_chain.json\")"
   ]
  },
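  {
   "cell_type": "markdown",
   "id": "c9d0e1f2",
   "metadata": {},
   "source": [
    "A corresponding sketch for the YAML file saved earlier, assuming `load_chain` dispatches on the file extension the same way `.save` does (the `chain_from_yaml` name is just for illustration):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d3e4f5a6",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: load the YAML version saved above.\n",
    "# Assumes `load_chain` handles `.yaml` paths the same way it handles `.json`.\n",
    "chain_from_yaml = load_chain(\"llm_chain.yaml\")"
   ]
  },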
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "20d99787",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
      "Prompt after formatting:\n",
      "\u001b[32;1m\u001b[1;3mQuestion: whats 2 + 2\n",
      "\n",
      "Answer: Let's think step by step.\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "' 2 + 2 = 4'"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chain.run(\"whats 2 + 2\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "14449679",
   "metadata": {},
   "source": [
    "## Saving components separately\n",
    "In the above example, we can see that the prompt and LLM configuration information is saved in the same JSON file as the overall chain. Alternatively, we can split them up and save them separately, which often makes the saved components more modular. To do this, we just need to specify `llm_path` instead of the `llm` component, and `prompt_path` instead of the `prompt` component."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "50ec35ab",
   "metadata": {},
   "outputs": [],
   "source": [
    "llm_chain.prompt.save(\"prompt.json\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "c48b39aa",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\r\n",
      "    \"input_variables\": [\r\n",
      "        \"question\"\r\n",
      "    ],\r\n",
      "    \"output_parser\": null,\r\n",
      "    \"template\": \"Question: {question}\\n\\nAnswer: Let's think step by step.\",\r\n",
      "    \"template_format\": \"f-string\"\r\n",
      "}"
     ]
    }
   ],
   "source": [
    "!cat prompt.json"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "13c92944",
   "metadata": {},
   "outputs": [],
   "source": [
    "llm_chain.llm.save(\"llm.json\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "1b815f89",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\r\n",
      "    \"model_name\": \"text-davinci-003\",\r\n",
      "    \"temperature\": 0.0,\r\n",
      "    \"max_tokens\": 256,\r\n",
      "    \"top_p\": 1,\r\n",
      "    \"frequency_penalty\": 0,\r\n",
      "    \"presence_penalty\": 0,\r\n",
      "    \"n\": 1,\r\n",
      "    \"best_of\": 1,\r\n",
      "    \"request_timeout\": null,\r\n",
      "    \"logit_bias\": {},\r\n",
      "    \"_type\": \"openai\"\r\n",
      "}"
     ]
    }
   ],
   "source": [
    "!cat llm.json"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "7e6aa9ab",
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "\n",
    "config = {\n",
    "    \"memory\": None,\n",
    "    \"verbose\": True,\n",
    "    \"prompt_path\": \"prompt.json\",\n",
    "    \"llm_path\": \"llm.json\",\n",
    "    \"output_key\": \"text\",\n",
    "    \"_type\": \"llm_chain\",\n",
    "}\n",
    "\n",
    "with open(\"llm_chain_separate.json\", \"w\") as f:\n",
    "    json.dump(config, f, indent=2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "8e959ca6",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\r\n",
      "  \"memory\": null,\r\n",
      "  \"verbose\": true,\r\n",
      "  \"prompt_path\": \"prompt.json\",\r\n",
      "  \"llm_path\": \"llm.json\",\r\n",
      "  \"output_key\": \"text\",\r\n",
      "  \"_type\": \"llm_chain\"\r\n",
      "}"
     ]
    }
   ],
   "source": [
    "!cat llm_chain_separate.json"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "662731c0",
   "metadata": {},
   "source": [
    "We can then load it in the same way."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "d69ceb93",
   "metadata": {},
   "outputs": [],
   "source": [
    "chain = load_chain(\"llm_chain_separate.json\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "a99d61b9",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
      "Prompt after formatting:\n",
      "\u001b[32;1m\u001b[1;3mQuestion: whats 2 + 2\n",
      "\n",
      "Answer: Let's think step by step.\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "' 2 + 2 = 4'"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chain.run(\"whats 2 + 2\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}