Mirror of https://github.com/hwchase17/langchain, synced 2024-11-10 01:10:59 +00:00
Commit e6f0cee896
**Description:** Adding async methods to both OllamaLLM and ChatOllama to enable async streaming and async `.on_llm_new_token` callbacks. **Issue:** ChatOllama does not work in combination with an AsyncCallbackManager because the `.on_llm_new_token` method is not awaited.
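For context, here is a minimal sketch (not taken from the PR itself) of the usage this change is meant to enable: streaming ChatOllama output through an async callback handler whose `on_llm_new_token` coroutine is awaited for every generated token. It assumes a local Ollama server with the `llama2` model pulled, and import paths matching the `langchain` layout in this snapshot; both may differ in other versions.

```python
import asyncio

from langchain.callbacks.base import AsyncCallbackHandler
from langchain.chat_models import ChatOllama
from langchain.schema import HumanMessage


class TokenPrinter(AsyncCallbackHandler):
    """Async handler; on_llm_new_token should now be awaited per streamed token."""

    async def on_llm_new_token(self, token: str, **kwargs) -> None:
        print(token, end="", flush=True)


async def main() -> None:
    # Assumes an Ollama server on localhost with the llama2 model available.
    chat = ChatOllama(model="llama2", callbacks=[TokenPrinter()])
    await chat.agenerate([[HumanMessage(content="Tell me a short joke.")]])


asyncio.run(main())
```

The intent of the change is that each token chunk reaches the async handler as it is generated, rather than the callback being invoked without being awaited (or only after the full response is available).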
__init__.py
anthropic.py
anyscale.py
azure_openai.py
azureml_endpoint.py
baichuan.py
baidu_qianfan_endpoint.py
bedrock.py
cohere.py
databricks.py
ernie.py
everlyai.py
fake.py
fireworks.py
gigachat.py
google_palm.py
gpt_router.py
huggingface.py
human.py
hunyuan.py
javelin_ai_gateway.py
jinachat.py
konko.py
litellm.py
meta.py
minimax.py
mlflow_ai_gateway.py
mlflow.py
ollama.py
openai.py
pai_eas_endpoint.py
promptlayer_openai.py
tongyi.py
vertexai.py
volcengine_maas.py
yandex.py