Commit Graph

10394 Commits

Author SHA1 Message Date
Leonid Ganeline
716a316654
core: docstrings indexing (#23785)
Added missing docstrings. Formatted docstrings into a consistent form.
2024-07-03 11:27:34 -04:00
Leonid Ganeline
30fdc2dbe7
core: docstrings messages (#23788)
Added missing docstrings. Formatted docstrings into a consistent form.
2024-07-03 11:25:00 -04:00
ccurme
54e730f6e4
fireworks[patch]: read from tool calls attribute (#23820) 2024-07-03 11:11:17 -04:00
Bagatur
e787249af1
docs: fireworks standard page (#23816) 2024-07-03 14:33:05 +00:00
Jacob Lee
27aa4d38bf
docs[patch]: Update structured output docs to have more discussion (#23786)
CC @agola11 @ccurme
2024-07-02 16:53:31 -07:00
Bagatur
ebb404527f
anthropic[patch]: Release 0.1.19 (#23783) 2024-07-02 18:17:25 -04:00
Bagatur
6168c846b2
openai[patch]: Release 0.1.14 (#23782) 2024-07-02 18:17:15 -04:00
Bagatur
cb9812593f
openai[patch]: expose model request payload (#23287)
![Screenshot 2024-06-21 at 3 12 12 PM](https://github.com/langchain-ai/langchain/assets/22008038/6243a01f-1ef6-4085-9160-2844d9f2b683)
2024-07-02 17:43:55 -04:00
Bagatur
ed200bf2c4
anthropic[patch]: expose payload (#23291)
![Screenshot 2024-06-21 at 4 56 02 PM](https://github.com/langchain-ai/langchain/assets/22008038/a2c6224f-3741-4502-9607-1a726a0551c9)
2024-07-02 17:43:47 -04:00
Bagatur
7a3d8e5a99
core[patch]: Release 0.2.11 (#23780) 2024-07-02 17:35:57 -04:00
Bagatur
d677dadf5f
core[patch]: mark RemoveMessage beta (#23656) 2024-07-02 21:27:21 +00:00
ccurme
1d54ac93bb
ai21[patch]: release 0.1.7 (#23781) 2024-07-02 21:24:13 +00:00
Asaf Joseph Gardin
320dc31822
partners: AI21 Labs Jamba Streaming Support (#23538)
- **Description:** Added support for streaming in AI21 Jamba Model
- **Twitter handle:** https://github.com/AI21Labs

---------

Co-authored-by: Asaf Gardin <asafg@ai21.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-02 17:15:46 -04:00
Qingchuan Hao
5cd4083457
community: make bing web search as the only option (#23523)
This PR makes Bing web search the only option for BingSearchAPIWrapper, to
simplify the user interface in LangChain.
This is a follow-up to
https://github.com/langchain-ai/langchain/pull/23306.
2024-07-02 17:13:54 -04:00
William W Wang
76e7e4e9e6
Update docs: LangChain agent memory (#23673)
**Description:** Update docs content on agent memory
2024-07-02 17:06:32 -04:00
ccurme
7c1cddf1b7
anthropic[patch]: release 0.1.18 (#23778) 2024-07-02 16:46:47 -04:00
ccurme
c9dac59008
anthropic[patch]: fix model name in some integration tests (#23779) 2024-07-02 20:45:52 +00:00
Bagatur
7a6c06cadd
anthropic[patch]: tool output parser fix (#23647) 2024-07-02 16:33:22 -04:00
ccurme
46cbf0e4aa
anthropic[patch]: use core output parsers for structured output (#23776)
Also add to standard tests for structured output.
2024-07-02 16:15:26 -04:00
kiarina
dc396835ed
langchain_anthropic: add stop_reason in ChatAnthropic stream result (#23689)
`ChatAnthropic` exposes `stop_reason` on the resulting `AIMessage` for
`invoke` and `ainvoke`, but not for `stream` and `astream`.
This differs from the behavior of `ChatOpenAI`.
It is possible to get `stop_reason` from `stream` as well; since it is often
needed to determine the next action after the LLM call, exposing it makes
situations where only `stop_reason` is needed easier to handle.

- Issue: NA
- Dependencies: NA
- Twitter handle: https://x.com/kiarina37
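
A hedged sketch of how the streamed `stop_reason` could be observed after this change (the final-chunk behavior and the exact metadata key are assumptions, not taken from the PR):

```python
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-sonnet-20240229")
for chunk in llm.stream("Write a haiku about rivers"):
    # Most chunks carry only content; the closing chunk is assumed to carry
    # metadata such as {"stop_reason": "end_turn"} once this change lands.
    if chunk.response_metadata:
        print(chunk.response_metadata)
```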
2024-07-02 15:16:20 -04:00
Bagatur
27ce58f86e
docs: google genai standard page (#23766)
Part of #22296
2024-07-02 13:54:34 -04:00
maang-h
e4e28a6ff5
community[patch]: Fix MiniMaxChat validate_environment error (#23770)
- **Description:** Fix some issues in MiniMaxChat 
  - Fix `minimax_api_host` not in `values` error
- Remove `minimax_group_id` from reading environment variables, the
`minimax_group_id` no longer use in MiniMaxChat
  - Invoke callback prior to yielding token, the issus #16913
2024-07-02 13:23:32 -04:00
SN
acc457f645
core[patch]: fix nested sections for mustache templating (#23747)
The prompt template variable detection only worked for singly-nested
sections because we just kept track of whether we were in a section and
then set that flag to false as soon as we encountered an end block. For
example, the following:

```
{{#outerSection}}
    {{variableThatShouldntShowUp}}
    {{#nestedSection}}
        {{nestedVal}}
    {{/nestedSection}}
    {{anotherVariableThatShouldntShowUp}}
{{/outerSection}}
```

Would yield `['outerSection', 'anotherVariableThatShouldntShowUp']` as
input_variables (whereas it should just yield `['outerSection']`). This
fixes that by keeping track of the current depth and using a stack.
2024-07-02 10:20:45 -07:00
Karim Lalani
acc8fb3ead
docs[patch]: Update OllamaFunctions docs to match chat model integration template (#23179)
Added Tool Calling Agent Example with langgraph to OllamaFunctions
documentation
2024-07-02 10:05:44 -07:00
Bagatur
79c07a8ade
docs: standardize bedrock page (#23738)
Part of #22296
2024-07-02 12:03:36 -04:00
Teja Hara
a77a263e24
Added langchain-community installation (#23741)
PR title: Docs enhancement

- Description: Adding installation instructions for integrations
requiring the langchain-community package since 0.2
- Issue: https://github.com/langchain-ai/langchain/issues/22005

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-07-02 11:03:07 -04:00
Eugene Yurtsev
46ff0f7a3c
community[patch]: Update @root_validators to use explicit pre=True or pre=False (#23737) 2024-07-02 10:47:21 -04:00
Igor Drozdov
b664dbcc36
feat(community): add support for tool_calls response (#23765)
When `model_kwargs={"tools": tools}` is passed to `ChatLiteLLM`, the request
is executed, but the tool calls in the response are not recognized correctly.

Let's add `tool_calls` to the `additional_kwargs`.

## ChatAnthropic

I used the following example to verify the output of llm with tools:

```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_anthropic import ChatAnthropic

class GetWeather(BaseModel):
    '''Get the current weather in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

class GetPopulation(BaseModel):
    '''Get the current population in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

llm = ChatAnthropic(model="claude-3-sonnet-20240229")
llm_with_tools = llm.bind_tools([GetWeather, GetPopulation])
ai_msg = llm_with_tools.invoke("Which city is hotter today and which is bigger: LA or NY?")
print(ai_msg.tool_calls)
```

I get the following response:

```json
[{'name': 'GetWeather', 'args': {'location': 'Los Angeles, CA'}, 'id': 'toolu_01UfDA89knrhw3vFV9X47neT'}, {'name': 'GetWeather', 'args': {'location': 'New York, NY'}, 'id': 'toolu_01NrYVRYae7m7z7tBgyPb3Gd'}, {'name': 'GetPopulation', 'args': {'location': 'Los Angeles, CA'}, 'id': 'toolu_01EPFEpDgzL6vV2dTpD9SVP5'}, {'name': 'GetPopulation', 'args': {'location': 'New York, NY'}, 'id': 'toolu_01B5J6tPJXgwwfhQX9BHP2dt'}]
```

## LiteLLM

Based on https://litellm.vercel.app/docs/completion/function_call

```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.utils.function_calling import convert_to_openai_tool
import litellm

class GetWeather(BaseModel):
    '''Get the current weather in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

class GetPopulation(BaseModel):
    '''Get the current population in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

prompt = "Which city is hotter today and which is bigger: LA or NY?"
tools = [convert_to_openai_tool(GetWeather), convert_to_openai_tool(GetPopulation)]

response = litellm.completion(model="claude-3-sonnet-20240229", messages=[{'role': 'user', 'content': prompt}], tools=tools)
print(response.choices[0].message.tool_calls)
```

```python
[ChatCompletionMessageToolCall(function=Function(arguments='{"location": "Los Angeles, CA"}', name='GetWeather'), id='toolu_01HeDWV5vP7BDFfytH5FJsja', type='function'), ChatCompletionMessageToolCall(function=Function(arguments='{"location": "New York, NY"}', name='GetWeather'), id='toolu_01EiLesUSEr3YK1DaE2jxsQv', type='function'), ChatCompletionMessageToolCall(function=Function(arguments='{"location": "Los Angeles, CA"}', name='GetPopulation'), id='toolu_01Xz26zvkBDRxEUEWm9pX6xa', type='function'), ChatCompletionMessageToolCall(function=Function(arguments='{"location": "New York, NY"}', name='GetPopulation'), id='toolu_01SDqKnsLjvUXuBsgAZdEEpp', type='function')]
```

## ChatLiteLLM

When I try the following

```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.utils.function_calling import convert_to_openai_tool
from langchain_community.chat_models import ChatLiteLLM

class GetWeather(BaseModel):
    '''Get the current weather in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

class GetPopulation(BaseModel):
    '''Get the current population in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

prompt = "Which city is hotter today and which is bigger: LA or NY?"
tools = [convert_to_openai_tool(GetWeather), convert_to_openai_tool(GetPopulation)]

llm = ChatLiteLLM(model="claude-3-sonnet-20240229", model_kwargs={"tools": tools})
ai_msg = llm.invoke(prompt)
print(ai_msg)
print(ai_msg.tool_calls)
```

```python
content="Okay, let's find out the current weather and populations for Los Angeles and New York City:" response_metadata={'token_usage': Usage(prompt_tokens=329, completion_tokens=193, total_tokens=522), 'model': 'claude-3-sonnet-20240229', 'finish_reason': 'tool_calls'} id='run-748b7a84-84f4-497e-bba1-320bd4823937-0'
[]
```

---

When I apply the changes of this PR, the output is

```json
[{'name': 'GetWeather', 'args': {'location': 'Los Angeles, CA'}, 'id': 'toolu_017D2tGjiaiakB1HadsEFZ4e'}, {'name': 'GetWeather', 'args': {'location': 'New York, NY'}, 'id': 'toolu_01WrDpJfVqLkPejWzonPCbLW'}, {'name': 'GetPopulation', 'args': {'location': 'Los Angeles, CA'}, 'id': 'toolu_016UKyYrVAV9Pz99iZGgGU7V'}, {'name': 'GetPopulation', 'args': {'location': 'New York, NY'}, 'id': 'toolu_01Sgv1imExFX1oiR1Cw88zKy'}]
```

Co-authored-by: Igor Drozdov <idrozdov@gitlab.com>
2024-07-02 10:42:08 -04:00
Eugene Yurtsev
338cef35b4
community[patch]: update @root_validator in utilities namespace (#23768)
Update all utilities to use `pre=True` or `pre=False`

https://github.com/langchain-ai/langchain/issues/22819
2024-07-02 14:33:01 +00:00
wenngong
ee5eedfa04
partners: support reading HuggingFace params from env (#23309)
Description: 
1. The partners/HuggingFace module now supports reading params from env. The
langchain_community/.../huggingfaceXX modules were not adjusted since they are
deprecated.
2. pydantic 2 @root_validator migration.

Issue: #22448 #22819
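
A hedged sketch of the env-fallback pattern and the explicit `pre=True` root validator that these migration commits describe (field and environment-variable names are illustrative, not necessarily those used in the PRs):

```python
from langchain_core.pydantic_v1 import BaseModel, root_validator
from langchain_core.utils import get_from_dict_or_env

class HuggingFaceEndpointSketch(BaseModel):
    """Illustrative model; not the merged partner-package code."""

    huggingfacehub_api_token: str = ""

    # explicit pre=True (instead of a bare @root_validator) runs on raw input
    @root_validator(pre=True)
    def validate_environment(cls, values: dict) -> dict:
        # fall back to the HUGGINGFACEHUB_API_TOKEN env var when not passed explicitly
        values["huggingfacehub_api_token"] = get_from_dict_or_env(
            values, "huggingfacehub_api_token", "HUGGINGFACEHUB_API_TOKEN", default=""
        )
        return values
```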

---------

Co-authored-by: gongwn1 <gongwn1@lenovo.com>
2024-07-02 10:12:45 -04:00
antonpibm
ffde8a6a09
Milvus vectorstore: fix pass ids as argument after upsert (#23761)
**Description**: The Milvus vectorstore supports both `add_documents` via
the base class and an `upsert` method, which deletes and re-adds documents
based on their ids.

**Issue**: Due to a mismatch between the interfaces, the ids used by `upsert`
are ignored in `add_documents`: `ids` is passed as a positional argument in
`upsert` but via `kwargs` in `add_documents`.

This caused exceptions and inconsistency in the DB, tested with
`auto_id=False`.

**Fix**: pass `ids` via `kwargs` to `add_documents`.
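
A hedged sketch of the handoff described above (a simplified stand-in class, not the actual Milvus integration code):

```python
from typing import Any, List, Optional

class MilvusLikeStoreSketch:
    """Hypothetical vector store illustrating the upsert/add_documents handoff."""

    def delete(self, ids: Optional[List[str]] = None, **kwargs: Any) -> None:
        print(f"deleting {ids}")

    def add_documents(self, documents: List[Any], **kwargs: Any) -> List[str]:
        # the base-class signature only accepts ids through **kwargs
        ids = kwargs.get("ids") or [f"auto-{i}" for i in range(len(documents))]
        print(f"adding {documents} with ids {ids}")
        return ids

    def upsert(self, ids: Optional[List[str]] = None,
               documents: Optional[List[Any]] = None, **kwargs: Any) -> List[str]:
        if ids is not None:
            self.delete(ids=ids, **kwargs)
        # before the fix: add_documents(documents, ids) -- the ids were silently dropped
        return self.add_documents(documents or [], ids=ids, **kwargs)

store = MilvusLikeStoreSketch()
store.upsert(ids=["doc-1"], documents=["hello world"])
```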
2024-07-02 13:45:30 +00:00
Eugene Yurtsev
d084172b63
community[patch]: root validator set explicit pre=False or pre=True (#23764)
See issue: https://github.com/langchain-ai/langchain/issues/22819
2024-07-02 09:42:05 -04:00
Khelan Modi
4457e64e13
Update azure_cosmos_db for mongodb documentation (#23740)
added pre-filtering documentation

- **Description:** added filter vector search
- **Issue:** N/A
- **Dependencies:** N/A

No need for tests, just a simple doc update.
2024-07-02 12:53:05 +00:00
panwg3
bc98f90ba3
update wrong words (#23749)
2024-07-02 08:50:20 -04:00
mattthomps1
cc55823486
docs: updated PPLX model (#23723)
Description: updated pplx docs to reference a currently [supported
model](https://docs.perplexity.ai/docs/model-cards): pplx-70b-online
-> llama-3-sonar-small-32k-online

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-02 08:48:49 -04:00
Bagatur
aa165539f6
docs: standardize cohere page (#23739)
Part of #22296
2024-07-01 19:34:13 -04:00
Jacob Lee
7791d92711
community[patch]: Fix requests alias for load_tools (#23734)
CC @baskaryan
2024-07-01 15:02:14 -07:00
Eugene Yurtsev
f24e38876a
community[patch]: Update root_validators to use explicit pre=True or pre=False (#23736) 2024-07-01 17:13:23 -04:00
Yannick Stephan
5b1de2ae93
mistralai: Fixed streaming in MistralAI with ainvoke and callbacks (#22000)
# Fix streaming in mistral with ainvoke 
- [x] **PR title**
- [x] **PR message**
- [x] **Add tests and docs**:
  1. [x] Added a test for the fixed integration.
2. [x] An example notebook showing its use. It lives in
`docs/docs/integrations` directory.
- [x] **Lint and test**: Ran `make format`, `make lint` and `make test`
from the root of the package(s) I've modified.

Hello 

* I identified an issue in the mistral package where the callback
streaming (see on_llm_new_token) was not functioning correctly when the
streaming parameter was set to True and the model was called with `ainvoke`.
* The root cause of the problem was that the streaming parameter was not
being taken into account (I think it's an oversight).
* To resolve the issue, I added the `streaming` attribute.
* Now, the callback with streaming works as expected when the streaming
parameter is set to True.

## How to reproduce

```python
import asyncio

from langchain_core.callbacks import StreamingStdOutCallbackHandler
from langchain_mistralai.chat_models import ChatMistralAI

# Add a callback (illustrative handler and prompt)
chain = ChatMistralAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()])
asyncio.run(chain.ainvoke("Tell me a short joke"))

# Observe on_llm_new_token
# Now the callback receives streaming tokens; before, they arrived in grouped format.
```

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-01 20:53:09 +00:00
Jacob Lee
f4b2e553e7
docs[patch]: Update Unstructured loader notebooks and install instructions (#23726)
CC @baskaryan @MthwRobinson
2024-07-01 13:36:48 -07:00
Eugene Yurtsev
5d2262af34
community[patch]: Update root_validators to use pre=True or pre=False (#23731)
Update root_validators in preparation for pydantic 2 migration.
2024-07-01 20:10:15 +00:00
Erick Friis
6019147b66
infra: filter template check (#23727) 2024-07-01 13:00:33 -07:00
Eugene Yurtsev
ebcee4f610
core[patch]: Add versionadded to get_by_ids (#23728) 2024-07-01 15:16:00 -04:00
Eugene Yurtsev
e800f6bb57
core[minor]: Create BaseMedia object (#23639)
This PR implements a BaseContent object from which Document and Blob
objects will inherit, as proposed here:
https://github.com/langchain-ai/langchain/pull/23544

Alternative: Create a base object that only has an identifier and no
metadata.

For now, I decided against it, since that refactor can be done at a later
time. It also feels a bit odd since our IDs are optional at the moment.
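
A hedged sketch of the shape being proposed (class and field names are illustrative, not the merged `langchain_core` code):

```python
from typing import Optional

from langchain_core.pydantic_v1 import BaseModel, Field

class BaseContentSketch(BaseModel):
    """Shared base carrying an optional id plus metadata."""

    id: Optional[str] = None
    metadata: dict = Field(default_factory=dict)

class DocumentSketch(BaseContentSketch):
    page_content: str

class BlobSketch(BaseContentSketch):
    data: Optional[bytes] = None
    mimetype: Optional[str] = None
```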

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-01 15:07:30 -04:00
Chip Davis
04bc5f1a95
partners[azure]: fix having openai_api_base set for other packages (#22068)
This fix is for #21726. When other packages that require the
`openai_api_base` environment variable are installed, users are not able
to instantiate AzureChatModels or AzureEmbeddings.

This PR adds a new boolean value, `ignore_openai_api_base`. When
set to True, it sets `openai_api_base` to `None`.
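
A hedged sketch of the idea described above (a simplified model, not the merged partner-package code):

```python
from typing import Optional

from langchain_core.pydantic_v1 import BaseModel, root_validator

class AzureSettingsSketch(BaseModel):
    azure_endpoint: Optional[str] = None
    openai_api_base: Optional[str] = None
    ignore_openai_api_base: bool = False

    @root_validator(pre=True)
    def _maybe_drop_api_base(cls, values: dict) -> dict:
        # when the flag is set, drop any inherited OPENAI_API_BASE value so the
        # Azure-specific endpoint settings take precedence
        if values.get("ignore_openai_api_base"):
            values["openai_api_base"] = None
        return values
```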

Two new tests were added for `test_azure`, along with a new file,
`test_azure_embeddings`.

A different approach may be better for this. If you can think of better
logic, let me know and I can adjust it.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-01 18:35:20 +00:00
Nuno Campos
b36e95caa9
core[patch]: use async messages where possible (#23718)
Fix #23716

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-07-01 18:33:05 +00:00
Spyros Avlonitis
8cfb2fa1b7
core[minor]: Add maxsize for InMemoryCache (#23405)
This PR introduces a maxsize parameter for the InMemoryCache class,
allowing users to specify the maximum number of items to store in the
cache. If the cache exceeds the specified maximum size, the oldest items
are removed. Additionally, comprehensive unit tests have been added to
ensure all functionalities are thoroughly tested. The tests are written
using pytest and cover both synchronous and asynchronous methods.
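
A rough sketch of the eviction behavior described above (not the actual `langchain_core` implementation; names are illustrative):

```python
from collections import OrderedDict
from typing import Any, Optional

class BoundedInMemoryCache:
    """Illustrative cache that evicts the oldest entry once maxsize is exceeded."""

    def __init__(self, maxsize: Optional[int] = None) -> None:
        self._cache: OrderedDict = OrderedDict()
        self._maxsize = maxsize

    def lookup(self, prompt: str, llm_string: str) -> Optional[Any]:
        return self._cache.get((prompt, llm_string))

    def update(self, prompt: str, llm_string: str, return_val: Any) -> None:
        if self._maxsize is not None and len(self._cache) >= self._maxsize:
            self._cache.popitem(last=False)  # drop the oldest item
        self._cache[(prompt, llm_string)] = return_val
```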

Twitter: @spyrosavl

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-07-01 14:21:21 -04:00
maang-h
96af8f31ae
community[patch]: Invoke callback prior to yielding token (#23638)
- **Description:** Invoke the callback prior to yielding the token in the stream
and astream methods for ChatZhipuAI.
- **Issue:** #16913
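
A hedged, generic illustration of the ordering change (not ChatZhipuAI's actual code):

```python
from typing import Callable, Iterator

def stream_with_callback(tokens: Iterator[str],
                         on_llm_new_token: Callable[[str], None]) -> Iterator[str]:
    """Yield tokens, invoking the callback *before* each yield (the fixed ordering)."""
    for token in tokens:
        on_llm_new_token(token)  # callback fires even if the consumer stops early
        yield token

for tok in stream_with_callback(iter(["Hel", "lo"]), print):
    pass
```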
2024-07-01 18:12:24 +00:00
Eugene Yurtsev
b5aef4cf97
core[patch]: Fix llm string representation for serializable models (#23416)
Fix LLM string representation for serializable objects.

Fix for issue: https://github.com/langchain-ai/langchain/issues/23257

The llm string of serializable chat models is the serialized
representation of the object. LangChain serialization dumps some basic
information about non-serializable objects, including their repr(), which
includes an object id.

This means that if a chat model has any non-serializable fields (e.g., a
cache), then any new instantiation of those fields will change the
llm representation of the chat model and cause cache misses.

i.e., re-instantiating a postgres cache would result in cache misses!
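
A hedged illustration of the failure mode (simplified; not the LangChain serialization code):

```python
class FakeCache:
    """Stands in for a non-serializable field such as a database-backed cache."""

def llm_cache_key(model_params: dict) -> str:
    # naive "serialization" that falls back to repr() for unknown objects
    return str({k: repr(v) for k, v in sorted(model_params.items())})

key_a = llm_cache_key({"model": "my-model", "cache": FakeCache()})
key_b = llm_cache_key({"model": "my-model", "cache": FakeCache()})
print(key_a == key_b)  # False: repr() embeds the object id, so cached results are missed
```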
2024-07-01 14:06:33 -04:00
nobbbbby
3904f2cd40
core: fix NameError (#23658)
**Description:** In the chat_models module of the language model, the
import statement for BaseModel has been moved from the conditionally
imported section to the main import area, fixing a `NameError`.
**Issue:** fixes `NameError`
2024-07-01 17:51:23 +00:00