Mirror of https://github.com/hwchase17/langchain
synced 2024-11-10 01:10:59 +00:00
6543e585a5
Just a simple default addition to the options payload for an Ollama generate call to support a max_new_tokens parameter. Should fix issue: https://github.com/langchain-ai/langchain/issues/14715
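As a rough sketch of the idea (not the actual LangChain implementation — `build_ollama_options` is a hypothetical helper), a user-facing `max_new_tokens` limit could be forwarded to Ollama via its `num_predict` option, which caps how many tokens the model generates in a response:

```python
def build_ollama_options(max_new_tokens=None, temperature=None):
    """Build the `options` payload for an Ollama /api/generate request.

    Maps a user-facing `max_new_tokens` onto Ollama's `num_predict`
    option; omitted values are left out so Ollama applies its defaults.
    """
    options = {}
    if max_new_tokens is not None:
        options["num_predict"] = max_new_tokens
    if temperature is not None:
        options["temperature"] = temperature
    return options


# Example request body for Ollama's generate endpoint
payload = {
    "model": "llama2",
    "prompt": "Why is the sky blue?",
    "options": build_ollama_options(max_new_tokens=128),
}
```

Leaving unset parameters out of `options` (rather than sending `None`) keeps the payload minimal and lets the Ollama server fall back to its own defaults.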
__init__.py
fake_chat_model.py
fake_llm.py
konko.py
test_ai21.py
test_aleph_alpha.py
test_anyscale.py
test_bedrock.py
test_callbacks.py
test_cerebriumai.py
test_fireworks.py
test_forefrontai.py
test_gooseai.py
test_gradient_ai.py
test_imports.py
test_loading.py
test_minimax.py
test_oci_generative_ai.py
test_oci_model_deployment_endpoint.py
test_ollama.py
test_openai.py
test_pipelineai.py
test_stochasticai.py
test_symblai_nebula.py
test_together.py
test_utils.py
test_watsonxllm.py