langchain/tests/integration_tests/llms
Ankush Gola caa8e4742e
Enable streaming for OpenAI LLM (#986)
* Support a callback `on_llm_new_token` that users can implement when `OpenAI.streaming` is set to `True`
2023-02-14 15:06:14 -08:00
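For context, the change described in this commit means that when the `OpenAI` LLM wrapper is constructed with `streaming=True`, each token returned by the completion API is forwarded to the `on_llm_new_token` hook of the registered callback handler(s). Below is a minimal user-side sketch. Note that the exact wiring is version-dependent and is an assumption here: at the time of this commit the handler was attached through a `CallbackManager` passed as `callback_manager=`, while later LangChain releases accept `callbacks=[...]` directly, as shown.

from typing import Any

from langchain.callbacks.base import BaseCallbackHandler
from langchain.llms import OpenAI


class TokenPrinter(BaseCallbackHandler):
    """Hypothetical handler: print each streamed token as soon as it arrives."""

    def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        # Called once per token, but only when the LLM was created with streaming=True.
        print(token, end="", flush=True)


# streaming=True switches the OpenAI wrapper to the streaming completion API.
# `callbacks=[...]` is the later spelling; around this commit the equivalent was
# callback_manager=CallbackManager([TokenPrinter()]).
llm = OpenAI(streaming=True, callbacks=[TokenPrinter()], temperature=0)
llm("Write a one-sentence summary of what token streaming is.")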
__init__.py initial commit 2022-10-24 14:51:15 -07:00
test_ai21.py Harrison/cohere experimental (#638) 2023-01-17 22:28:55 -08:00
test_anthropic.py Harrison/athropic (#921) 2023-02-06 22:29:25 -08:00
test_cerebrium.py Add GooseAI, CerebriumAI, Petals, ForefrontAI (#981) 2023-02-13 21:20:19 -08:00
test_cohere.py Harrison/llm saving (#331) 2022-12-13 06:46:01 -08:00
test_forefrontai.py Add GooseAI, CerebriumAI, Petals, ForefrontAI (#981) 2023-02-13 21:20:19 -08:00
test_gooseai.py Add GooseAI, CerebriumAI, Petals, ForefrontAI (#981) 2023-02-13 21:20:19 -08:00
test_huggingface_endpoint.py Harrison/inference endpoint (#861) 2023-02-06 18:14:25 -08:00
test_huggingface_hub.py Harrison/llm saving (#331) 2022-12-13 06:46:01 -08:00
test_huggingface_pipeline.py Harrison/version 0040 (#366) 2022-12-17 07:53:22 -08:00
test_manifest.py Harrison/fix lint (#138) 2022-11-14 08:55:59 -08:00
test_nlpcloud.py Harrison/llm saving (#331) 2022-12-13 06:46:01 -08:00
test_openai.py Enable streaming for OpenAI LLM (#986) 2023-02-14 15:06:14 -08:00
test_petals.py Add GooseAI, CerebriumAI, Petals, ForefrontAI (#981) 2023-02-13 21:20:19 -08:00
test_promptlayer_openai.py Harrison/llm integrations (#1039) 2023-02-13 22:06:25 -08:00
utils.py Harrison/improve cache (#368) 2022-12-18 16:22:42 -05:00