langchain/docs/modules/models/llms/examples
Ehsan M. Kermani 4a246e2fd6
Allow clearing cache and fix gptcache (#3493)
This PR:

* Adds a `clear` method to `BaseCache` and implements it for the various
caches
* Adds the default `init_func=None` and fixes the gptcache integration test
* Since the integration tests do not currently run in CI, I've verified the
changes by running `docs/modules/models/llms/examples/llm_caching.ipynb`
(until a proper e2e integration test runs in CI)
2023-04-26 22:03:50 -07:00
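The interface change described above can be sketched as follows. This is a simplified, hypothetical rendition, not the actual library code: the real `BaseCache` stores lists of `Generation` objects rather than plain strings, and lives in `langchain.cache`. It only illustrates the idea that every cache implementation now exposes a `clear()` method for resetting cached LLM results.

```python
# Minimal sketch (simplified types) of adding `clear` to a BaseCache-style
# interface, as in PR #3493. Values are plain strings here for brevity.
from abc import ABC, abstractmethod
from typing import Dict, Optional, Tuple


class BaseCache(ABC):
    """LLM cache interface; `clear` resets all stored generations."""

    @abstractmethod
    def lookup(self, prompt: str, llm_string: str) -> Optional[str]:
        """Return the cached value for (prompt, llm_string), if any."""

    @abstractmethod
    def update(self, prompt: str, llm_string: str, value: str) -> None:
        """Store a value under (prompt, llm_string)."""

    @abstractmethod
    def clear(self) -> None:
        """Drop every cached entry."""


class InMemoryCache(BaseCache):
    """Dict-backed cache keyed by the (prompt, llm_string) pair."""

    def __init__(self) -> None:
        self._cache: Dict[Tuple[str, str], str] = {}

    def lookup(self, prompt: str, llm_string: str) -> Optional[str]:
        return self._cache.get((prompt, llm_string))

    def update(self, prompt: str, llm_string: str, value: str) -> None:
        self._cache[(prompt, llm_string)] = value

    def clear(self) -> None:
        self._cache = {}


cache = InMemoryCache()
cache.update("2 + 2?", "fake-llm", "4")
print(cache.lookup("2 + 2?", "fake-llm"))  # 4
cache.clear()
print(cache.lookup("2 + 2?", "fake-llm"))  # None
```

Having `clear()` on the base interface means a notebook such as `llm_caching.ipynb` can reset any configured cache uniformly, without knowing which backend is in use.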
async_llm.ipynb add async support for anthropic (#2114) 2023-03-28 22:49:14 -04:00
custom_llm.ipynb big docs refactor (#1978) 2023-03-26 19:49:46 -07:00
fake_llm.ipynb bump version to 131 (#2391) 2023-04-04 07:21:50 -07:00
llm_caching.ipynb Allow clearing cache and fix gptcache (#3493) 2023-04-26 22:03:50 -07:00
llm_serialization.ipynb Minor text correction (#2298) 2023-04-02 13:54:42 -07:00
llm.json big docs refactor (#1978) 2023-03-26 19:49:46 -07:00
llm.yaml big docs refactor (#1978) 2023-03-26 19:49:46 -07:00
streaming_llm.ipynb enable streaming in anthropic llm wrapper (#2065) 2023-03-27 20:25:00 -04:00
token_usage_tracking.ipynb Add easy print method to openai callback (#2848) 2023-04-13 11:28:42 -07:00