# Fix streaming in mistral with ainvoke

- [x] **PR title**
- [x] **PR message**
- [x] **Add tests and docs**:
  1. [x] Added a test for the fixed integration.
  2. [x] An example notebook showing its use. It lives in the `docs/docs/integrations` directory.
- [x] **Lint and test**: Ran `make format`, `make lint` and `make test` from the root of the package(s) I've modified.

Hello

* I identified an issue in the mistralai package where callback streaming (see `on_llm_new_token`) was not functioning correctly when the `streaming` parameter was set to `True` and the model was called with `ainvoke`.
* The root cause of the problem was that the `streaming` setting was not taken into account (I think it's an oversight).
* To resolve the issue, I added the `streaming` attribute.
* Now the callback works as expected when the `streaming` parameter is set to `True`.

## How to reproduce

```python
from langchain_mistralai.chat_models import ChatMistralAI

chain = ChatMistralAI(streaming=True)
# Add a callback
await chain.ainvoke(...)
# Observe on_llm_new_token
# Now the callback receives streamed tokens; before, they arrived grouped together.
```

Co-authored-by: Erick Friis <erick@langchain.dev>
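For context, here is a minimal sketch of how streamed tokens surface through a callback handler; the `PrintTokenHandler` class and the prompt text are illustrative, not part of the fix itself:

```python
import asyncio

from langchain_core.callbacks import BaseCallbackHandler
from langchain_mistralai.chat_models import ChatMistralAI


class PrintTokenHandler(BaseCallbackHandler):
    """Illustrative handler: print each token as it streams in."""

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        print(token, end="", flush=True)


async def main() -> None:
    chat = ChatMistralAI(streaming=True, callbacks=[PrintTokenHandler()])
    # With the fix, on_llm_new_token fires per token during ainvoke;
    # previously the full text arrived in one grouped chunk.
    await chat.ainvoke("say a brief hello")


asyncio.run(main())
```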
# langchain-mistralai
This package contains the LangChain integrations for MistralAI through their `mistralai` SDK.
## Installation

```bash
pip install -U langchain-mistralai
```
## Chat Models

This package contains the `ChatMistralAI` class, which is the recommended way to interface with MistralAI models.

To use it, install the requirements and configure your environment:

```bash
export MISTRAL_API_KEY=your-api-key
```
Then initialize:

```python
from langchain_core.messages import HumanMessage
from langchain_mistralai.chat_models import ChatMistralAI

chat = ChatMistralAI(model="mistral-small")
messages = [HumanMessage(content="say a brief hello")]
chat.invoke(messages)
```
`ChatMistralAI` also supports async and streaming functionality:

```python
# For async...
await chat.ainvoke(messages)

# For streaming...
for chunk in chat.stream(messages):
    print(chunk.content, end="", flush=True)
```
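The two can also be combined: `astream` is part of the standard LangChain chat model interface. A minimal sketch, assuming the `chat` and `messages` objects from above:

```python
import asyncio


async def stream_hello() -> None:
    # Chunks arrive one token at a time, same shape as the sync stream.
    async for chunk in chat.astream(messages):
        print(chunk.content, end="", flush=True)


asyncio.run(stream_hello())
```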
## Embeddings

With `MistralAIEmbeddings`, you can directly use the default model `mistral-embed`, or set a different one if available.
```python
from langchain_mistralai import MistralAIEmbeddings

# Reads MISTRAL_API_KEY from the environment
embedding = MistralAIEmbeddings()

# Choose model
embedding.model = "mistral-embed"

# Simple query
res_query = embedding.embed_query("The test information")

# Documents
res_document = embedding.embed_documents(["test1", "another test"])
```
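To illustrate what these calls return: `embed_query` yields a list of floats and `embed_documents` a list of such lists. A small sketch of putting them to use; the `cosine_similarity` helper below is our own, not part of the package:

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm


# Rank the documents against the query by embedding similarity.
for text, vec in zip(["test1", "another test"], res_document):
    print(text, cosine_similarity(res_query, vec))
```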