langchain/docs/modules/models/llms/examples
Latest commit: 8441cbfc03 by Jonathan Page, 2023-03-28 22:56:17 -07:00
Add successful request count to OpenAI callback (#2128)
I've found it useful to track the number of successful requests made to
OpenAI. This gives me a better sense of how efficient my prompts are and
helps me compare map_reduce/refine on a cheaper model against stuffing on a
more expensive model with higher capacity.
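For context, a minimal sketch of how that counter surfaces through the callback used in token_usage_tracking.ipynb: calls made inside the get_openai_callback() context manager are aggregated on the handler, and the successful_requests field added by #2128 sits alongside the token counts. The model name and prompts below are illustrative, and an OPENAI_API_KEY is assumed to be set in the environment.

```python
from langchain.llms import OpenAI
from langchain.callbacks import get_openai_callback

# Illustrative model choice; requires OPENAI_API_KEY in the environment.
llm = OpenAI(model_name="text-davinci-003", temperature=0)

with get_openai_callback() as cb:
    llm("Tell me a joke")
    llm("Tell me another joke")

# Every OpenAI call made inside the block is aggregated on the handler.
print(cb.total_tokens)         # prompt + completion tokens across both calls
print(cb.successful_requests)  # request count added in #2128; 2 for the calls above
```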
async_llm.ipynb add async support for anthropic (#2114) 2023-03-28 22:49:14 -04:00
custom_llm.ipynb big docs refactor (#1978) 2023-03-26 19:49:46 -07:00
fake_llm.ipynb big docs refactor (#1978) 2023-03-26 19:49:46 -07:00
llm_caching.ipynb big docs refactor (#1978) 2023-03-26 19:49:46 -07:00
llm_serialization.ipynb big docs refactor (#1978) 2023-03-26 19:49:46 -07:00
llm.json big docs refactor (#1978) 2023-03-26 19:49:46 -07:00
llm.yaml big docs refactor (#1978) 2023-03-26 19:49:46 -07:00
streaming_llm.ipynb enable streaming in anthropic llm wrapper (#2065) 2023-03-27 20:25:00 -04:00
token_usage_tracking.ipynb Add successful request count to OpenAI callback (#2128) 2023-03-28 22:56:17 -07:00