docs: improve flow of llm caching notebook (#5309)

The notebook `llm_caching` demos various caching providers. In the previous version, the setup common to all of the examples sat under the `In Memory Caching` heading. A user who only wants to try a particular provider would skip that setup, run the cells for the provider they are interested in, and then hit import and variable reference errors. This commit moves the common setup to the top of the notebook so every example can run on its own.

## Who can review?

Community members can review the PR once tests pass. Tag maintainers/contributors who might be interested: @dev2049
This commit is contained in:
parent 0a8d6bc402
commit f75f0dbad6
```diff
@@ -16,10 +16,15 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "from langchain.llms import OpenAI"
+    "import langchain\n",
+    "from langchain.llms import OpenAI\n",
+    "\n",
+    "# To make the caching really obvious, lets use a slower model.\n",
+    "llm = OpenAI(model_name=\"text-davinci-002\", n=2, best_of=2)"
    ]
   },
   {
+   "attachments": {},
    "cell_type": "markdown",
    "id": "b50f0598",
    "metadata": {},
@@ -34,22 +39,10 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "import langchain\n",
     "from langchain.cache import InMemoryCache\n",
     "langchain.llm_cache = InMemoryCache()"
    ]
   },
   {
-   "cell_type": "code",
-   "execution_count": 6,
-   "id": "f69f6283",
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "# To make the caching really obvious, lets use a slower model.\n",
-    "llm = OpenAI(model_name=\"text-davinci-002\", n=2, best_of=2)"
-   ]
-  },
-  {
    "cell_type": "code",
    "execution_count": 4,
```
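For readers skimming the raw notebook JSON above, the resulting cell order is easier to see as plain Python. The contents below are taken directly from the added lines of the diff; the shared setup now comes first:

```python
# Common setup cell, moved to the top of the notebook.
import langchain
from langchain.llms import OpenAI

# To make the caching really obvious, lets use a slower model.
llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2)
```

Each provider section then only configures its own cache. For the `In Memory Caching` section, the cell that remains is:

```python
# Provider-specific cell: swap in a different cache class for other providers.
from langchain.cache import InMemoryCache

langchain.llm_cache = InMemoryCache()
```

With this split, jumping straight to any provider's section only requires running the top setup cell first, rather than the cells of an unrelated example.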