langchain/docs/modules/models/llms/examples
Michael Landis f75f0dbad6
docs: improve flow of llm caching notebook (#5309)

The notebook `llm_caching` demos various caching providers. In the
previous version, the setup code common to all examples sat under the
`In Memory Caching` heading.

A user who only wants to try a particular provider is likely to skip
that section and run just the cells for the provider they are
interested in, and will then hit import and variable reference errors.
This commit moves the common setup to the top of the notebook to avoid
this.
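For context, the setup in question wires a cache into the LLM call path so that repeated identical prompts skip the provider. The snippet below is a minimal pure-Python sketch of the lookup/update pattern such a cache implements; `SimpleLLMCache` and its store are illustrative, not LangChain classes, though they mirror the `lookup(prompt, llm_string)` / `update(...)` shape of LangChain's cache interface.

```python
from typing import Dict, Optional, Tuple


class SimpleLLMCache:
    """Illustrative in-memory LLM cache keyed on (prompt, llm_string)."""

    def __init__(self) -> None:
        # Maps (prompt, llm_string) -> generated text.
        self._store: Dict[Tuple[str, str], str] = {}

    def lookup(self, prompt: str, llm_string: str) -> Optional[str]:
        # Return the cached completion, or None on a cache miss.
        return self._store.get((prompt, llm_string))

    def update(self, prompt: str, llm_string: str, result: str) -> None:
        # Record a completion so an identical later call can reuse it.
        self._store[(prompt, llm_string)] = result


cache = SimpleLLMCache()
assert cache.lookup("Tell me a joke", "fake-model") is None  # miss
cache.update("Tell me a joke", "fake-model", "Why did the ...")
assert cache.lookup("Tell me a joke", "fake-model") == "Why did the ..."
```

Each provider-specific example in the notebook swaps in its own backend (in-memory, SQLite, Redis, and so on) behind this same pattern, which is why the shared setup belongs above all of them.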

## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:

@dev2049
2023-05-26 13:34:11 -04:00
| File | Last commit | Date |
| --- | --- | --- |
| async_llm.ipynb | add async support for anthropic (#2114) | 2023-03-28 |
| custom_llm.ipynb | Callbacks Refactor [base] (#3256) | 2023-04-30 |
| fake_llm.ipynb | bump version to 131 (#2391) | 2023-04-04 |
| human_input_llm.ipynb | docs: fix minor typo + add wikipedia package installation part in human_input_llm.ipynb (#5118) | 2023-05-23 |
| llm_caching.ipynb | docs: improve flow of llm caching notebook (#5309) | 2023-05-26 |
| llm_serialization.ipynb | Minor text correction (#2298) | 2023-04-02 |
| llm.json | | |
| llm.yaml | | |
| streaming_llm.ipynb | fix json saving, update docs to reference anthropic chat model (#4364) | 2023-05-08 |
| token_usage_tracking.ipynb | Add easy print method to openai callback (#2848) | 2023-04-13 |