- **Description:** Tongyi uses different clients for its chat and vision
models. This PR chooses the proper client based on the model name, so
both chat and vision models are supported (a sketch of the selection
logic follows the example below). See the [Tongyi
documentation](https://help.aliyun.com/zh/dashscope/developer-reference/tongyi-qianwen-vl-plus-api?spm=a2c4g.11186623.0.0.27404c9a7upm11)
for details.
```
from langchain_core.messages import HumanMessage
from langchain_community.chat_models import ChatTongyi

# Use a vision-capable model; the client is selected from the model name.
llm = ChatTongyi(model_name="qwen-vl-max")

# Multimodal content: one image part and one text part.
image_message = {
    "image": "https://lilianweng.github.io/posts/2023-06-23-agent/agent-overview.png"
}
text_message = {
    "text": "summarize this picture",
}
message = HumanMessage(content=[text_message, image_message])
llm.invoke([message])
```
- **Issue:** None
- **Dependencies:** None
- **Twitter handle:** None
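For context, a minimal sketch of the kind of model-name-based client selection this PR describes. The function name and the exact condition are assumptions for illustration, not the verbatim implementation:
```
import dashscope

def select_client(model_name: str):
    # Hypothetical sketch: vision models (e.g. "qwen-vl-max") go through
    # the multimodal client; plain chat models use the text client.
    if "vl" in model_name:
        return dashscope.MultiModalConversation
    return dashscope.Generation
```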
- Description: call the `on_llm_new_token` callback before yielding the
chunk in `_stream`/`_astream` for some chat models, making all chat
models behave consistently (see the sketch after this list).
- Issue: N/A
- Dependencies: N/A
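A minimal sketch of the intended ordering inside a `_stream` implementation. The helper function is hypothetical; only the callback-before-yield ordering is the point:
```
from collections.abc import Iterator

from langchain_core.messages import AIMessageChunk
from langchain_core.outputs import ChatGenerationChunk

def _stream_tokens(tokens, run_manager=None) -> Iterator[ChatGenerationChunk]:
    # Notify callbacks about the new token first, then yield the chunk.
    for token in tokens:
        chunk = ChatGenerationChunk(message=AIMessageChunk(content=token))
        if run_manager:
            run_manager.on_llm_new_token(token, chunk=chunk)
        yield chunk
```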
Previously, if this did not find a mypy cache, it wouldn't run; this
change makes it always run. Also adds mypy ignore comments for existing
uncaught issues to unblock other PRs (a hypothetical example follows).
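For illustration, the suppressions use the standard `# type: ignore[...]` form. The snippet below is a made-up example of the style, not a line from the diff:
```
def add(x: int, y: int) -> int:
    return x + y

# Suppress the known arg-type error until the call site is properly fixed.
result = add("1", "2")  # type: ignore[arg-type]
```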
---------
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
- **Description:** This PR fixes a bug in the "system message check" in
langchain_community/chat_models/tongyi.py (a sketch of the corrected
check follows this list).
- **Issue:** Under the current logic, if there is no system message in
the chat messages, the error "System message can only be the first
message." is wrongly raised.
- **Dependencies:** No.
- **Twitter handle:** I don't have a Twitter account.
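A minimal sketch of the corrected check, given the stated behavior. The function name is hypothetical; the point is that a conversation with no system message raises nothing, while a misplaced system message still errors:
```
from langchain_core.messages import BaseMessage, SystemMessage

def check_system_message(messages: list[BaseMessage]) -> None:
    # Only a SystemMessage at a position other than the first is invalid;
    # a message list without any system message at all is fine.
    if any(isinstance(m, SystemMessage) for m in messages[1:]):
        raise ValueError("System message can only be the first message.")
```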
- **Description:** fixes and upgrades for the Tongyi LLM and the
ChatTongyi model:
- Fixed typos; it should be `Tongyi`, not `OpenAI`.
- Fixed a bug in `stream_generate_with_retry`; it's a real stream
generator now.
- Fixed a bug in `validate_environment`; `dashscope_api_key` is now
properly handled whether set via an environment variable or an
initialization parameter.
- Changed the `dashscope` response to incremental output by setting the
parameter `incremental_output`, which eliminates the need for the
prefix-removal trick (a `dashscope` call sketch appears after the PS
below).
- Removed some unused parameters, such as `n` and `prefix_messages`.
- Added a `_stream` method.
- Added async method support: `_astream`, `_agenerate`, `_abatch` (see
the usage sketch below).
- **Dependencies:** No new dependencies.
- **Tag maintainer:** @hwchase17
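A brief usage sketch of the new streaming and async paths. It assumes a valid `DASHSCOPE_API_KEY` in the environment and an arbitrary model choice; `stream`/`astream` are the public LangChain entry points that route to `_stream`/`_astream`:
```
import asyncio

from langchain_community.chat_models import ChatTongyi

llm = ChatTongyi(model_name="qwen-max")

# Synchronous streaming via the new `_stream` implementation.
for chunk in llm.stream("Tell me a short joke"):
    print(chunk.content, end="", flush=True)

# Asynchronous streaming via the new `_astream` implementation.
async def main() -> None:
    async for chunk in llm.astream("Tell me a short joke"):
        print(chunk.content, end="", flush=True)

asyncio.run(main())
```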
> PS: Some may be confused by the terms `dashscope`, `tongyi`, and
`Qwen`:
> - `dashscope`: a platform that deploys LLMs and provides APIs to
invoke them.
> - `tongyi`: a brand name, or an umbrella term, for Alibaba Cloud's
LLM/AI offerings.
> - `Qwen`: an open-sourced LLM deployed on `dashscope`.
>
> We use the `dashscope` SDK to interact with the `tongyi`-`Qwen` LLM.
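For reference, a minimal sketch of calling the `dashscope` SDK with `incremental_output`, which is what removes the prefix-stripping trick: each streamed event then carries only the newly generated text. Parameter names follow the dashscope docs, but treat the exact call shape as an assumption:
```
import dashscope

responses = dashscope.Generation.call(
    model="qwen-max",
    prompt="Tell me a short joke",
    stream=True,
    # Each event contains only the new tokens, not the full text so far.
    incremental_output=True,
)
for resp in responses:
    print(resp.output.text, end="", flush=True)
```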
---------
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Added missing docstrings. Fixed inconsistencies in docstrings.
**Note** CC @efriis
There were PR errors on
`langchain_experimental/prompt_injection_identifier/hugging_face_identifier.py`.
But I didn't touch this file in this PR! Could it be a caching problem?
I fixed this error.