langchain/libs/community/tests/unit_tests/llms
Micah Parker 6543e585a5
community[patch]: Added support for Ollama's num_predict option in ChatOllama (#16633)
A simple addition to the default options payload of an Ollama
generate call, exposing num_predict as a max_new_tokens-style parameter.

Should fix issue: https://github.com/langchain-ai/langchain/issues/14715
2024-01-26 15:00:19 -08:00
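The commit above adds Ollama's num_predict option to the request payload. A minimal sketch of how such an option might be merged into an Ollama /api/generate-style request body — `build_generate_payload` is a hypothetical helper for illustration, not the actual langchain source:

```python
# Illustrative sketch (hypothetical helper, not langchain code): merge an
# optional num_predict value into the "options" section of an Ollama
# generate-style request payload.

def build_generate_payload(prompt, model="llama2", num_predict=None, **extra_options):
    """Assemble a minimal Ollama generate-style request body."""
    options = dict(extra_options)
    if num_predict is not None:
        # num_predict caps the number of generated tokens,
        # analogous to max_new_tokens in other APIs.
        options["num_predict"] = num_predict
    return {"model": model, "prompt": prompt, "options": options}

payload = build_generate_payload("Hello", num_predict=128, temperature=0.7)
print(payload["options"])  # {'temperature': 0.7, 'num_predict': 128}
```

When num_predict is left unset, it is simply omitted from the options dict, so the Ollama server falls back to its own default.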
__init__.py
fake_chat_model.py
fake_llm.py
konko.py community[minor]: Adding Konko Completion endpoint (#15570) 2024-01-23 18:22:32 -08:00
test_ai21.py
test_aleph_alpha.py
test_anyscale.py
test_bedrock.py community[minor]: Bedrock async methods (#12477) 2024-01-22 14:44:49 -08:00
test_callbacks.py
test_cerebriumai.py
test_fireworks.py
test_forefrontai.py
test_gooseai.py
test_gradient_ai.py
test_imports.py community[minor]: Add OCI Generative AI integration (#16548) 2024-01-24 18:23:50 -08:00
test_loading.py
test_minimax.py
test_oci_generative_ai.py community[minor]: Add OCI Generative AI integration (#16548) 2024-01-24 18:23:50 -08:00
test_oci_model_deployment_endpoint.py community: add OCI Endpoint (#14250) 2023-12-20 11:52:20 -08:00
test_ollama.py community[patch]: Added support for Ollama's num_predict option in ChatOllama (#16633) 2024-01-26 15:00:19 -08:00
test_openai.py openai[minor]: implement langchain-openai package (#15503) 2024-01-05 15:03:28 -08:00
test_pipelineai.py Refactor: use SecretStr for PipelineAI llms (#15120) 2023-12-26 13:00:58 -08:00
test_stochasticai.py Refactor: use SecretStr for StochasticAI llms (#15118) 2023-12-26 12:59:51 -08:00
test_symblai_nebula.py
test_together.py
test_utils.py
test_watsonxllm.py