Commit Graph

550 Commits

Author SHA1 Message Date
Bagatur
1241a004cb fmt 2024-09-04 11:44:59 -07:00
Bagatur
4ba14ae9e5 fmt 2024-09-04 11:34:59 -07:00
Bagatur
dba308447d fmt 2024-09-04 11:28:04 -07:00
Yash Parmar
51dae57357
community[minor]: jina search tools integrating (jina reader) (#23339)
- **PR title**: "community: add Jina Search tool"
- **Description:** Added the Jina Search tool for querying the Jina
search API. This includes the implementation of the JinaSearchAPIWrapper
and the JinaSearch tool, along with a Jupyter notebook example
demonstrating its usage (see the sketch below).
- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter handle:** [Twitter
handle](https://x.com/yashp3020?t=7wM0gQ7XjGciFoh9xaBtqA&s=09)
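
A minimal usage sketch based on the description above; the import path, the implicit wrapper construction, and the `JINA_API_KEY` environment variable are assumptions, and the query is illustrative:

```python
from langchain_community.tools import JinaSearch  # import path assumed from this PR

# Assumes JINA_API_KEY is set in the environment
search_tool = JinaSearch()
print(search_tool.invoke("What is the Jina reader API?"))
```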


- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.

- [ ] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-09-02 14:52:14 -07:00
Alexander KIRILOV
6a8f8a56ac
community[patch]: added content_columns option to CSVLoader (#23809)
**Description:**
Adds a new `content_columns` option to the CSVLoader that lets us explicitly
specify the columns used to generate the Document content. Currently
these are implicitly set to "all fields not part of the
metadata_columns".

In some cases, however, it is useful to have a field appear both in the
metadata and in the document content.
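
A minimal sketch of how the new option might be used, assuming the kwarg is named `content_columns` as in the commit title; the file name and column names are illustrative:

```python
from langchain_community.document_loaders import CSVLoader

# Columns in articles.csv (illustrative): id, author, title, body
loader = CSVLoader(
    file_path="articles.csv",
    metadata_columns=["id", "author"],            # kept as Document metadata
    content_columns=["title", "body", "author"],  # new: "author" also appears in page_content
)
docs = loader.load()
print(docs[0].page_content)
print(docs[0].metadata)
```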
2024-09-02 20:25:53 +00:00
Bruno Alvisio
ab527027ac
community: Resolve refs recursively when generating openai_fn from OpenAPI spec (#19002)
- **Description:** This PR is intended to improve the generation of
payloads for OpenAI functions when converting from an OpenAPI spec file.
The solution is to recursively resolve `$refs`.
Currently when converting OpenAPI specs into OpenAI functions using
`openapi_spec_to_openai_fn`, if the schemas have nested references, the
generated functions contain `$ref` that causes the LLM to generate
payloads with an incorrect schema.

For example, for the following OpenAPI spec:

```
text = """
{
  "openapi": "3.0.3",
  "info": {
    "title": "Swagger Petstore - OpenAPI 3.0",
    "termsOfService": "http://swagger.io/terms/",
    "contact": {
      "email": "apiteam@swagger.io"
    },
    "license": {
      "name": "Apache 2.0",
      "url": "http://www.apache.org/licenses/LICENSE-2.0.html"
    },
    "version": "1.0.11"
  },
  "externalDocs": {
    "description": "Find out more about Swagger",
    "url": "http://swagger.io"
  },
  "servers": [
    {
      "url": "https://petstore3.swagger.io/api/v3"
    }
  ],
  "tags": [
    {
      "name": "pet",
      "description": "Everything about your Pets",
      "externalDocs": {
        "description": "Find out more",
        "url": "http://swagger.io"
      }
    },
    {
      "name": "store",
      "description": "Access to Petstore orders",
      "externalDocs": {
        "description": "Find out more about our store",
        "url": "http://swagger.io"
      }
    },
    {
      "name": "user",
      "description": "Operations about user"
    }
  ],
  "paths": {
    "/pet": {
      "post": {
        "tags": [
          "pet"
        ],
        "summary": "Add a new pet to the store",
        "description": "Add a new pet to the store",
        "operationId": "addPet",
        "requestBody": {
          "description": "Create a new pet in the store",
          "content": {
            "application/json": {
              "schema": {
                "$ref": "#/components/schemas/Pet"
              }
            }
          },
          "required": true
        },
        "responses": {
          "200": {
            "description": "Successful operation",
            "content": {
              "application/json": {
                "schema": {
                  "$ref": "#/components/schemas/Pet"
                }
              }
            }
          }
        }
      }
    }
  },
  "components": {
    "schemas": {
      "Tag": {
        "type": "object",
        "properties": {
          "id": {
            "type": "integer",
            "format": "int64"
          },
          "model_type": {
            "type": "number"
          }
        }
      },
      "Category": {
        "type": "object",
        "required": [
          "model",
          "year",
          "age"
        ],
        "properties": {
          "year": {
            "type": "integer",
            "format": "int64",
            "example": 1
          },
          "model": {
            "type": "string",
            "example": "Ford"
          },
          "age": {
            "type": "integer",
            "example": 42
          }
        }
      },
      "Pet": {
        "required": [
          "name"
        ],
        "type": "object",
        "properties": {
          "id": {
            "type": "integer",
            "format": "int64",
            "example": 10
          },
          "name": {
            "type": "string",
            "example": "doggie"
          },
          "category": {
            "$ref": "#/components/schemas/Category"
          },
          "tags": {
            "type": "array",
            "items": {
              "$ref": "#/components/schemas/Tag"
            }
          },
          "status": {
            "type": "string",
            "description": "pet status in the store",
            "enum": [
              "available",
              "pending",
              "sold"
            ]
          }
        }
      }
    }
  }
}
"""
```

Executing:
```
spec = OpenAPISpec.from_text(text)
pet_openai_functions, pet_callables = openapi_spec_to_openai_fn(spec)
response = model.invoke("Create a pet named Scott", functions=pet_openai_functions)
```

`pet_openai_functions` contains unresolved `$refs`:

```
[
  {
    "name": "addPet",
    "description": "Add a new pet to the store",
    "parameters": {
      "type": "object",
      "properties": {
        "json": {
          "properties": {
            "id": {
              "type": "integer",
              "schema_format": "int64",
              "example": 10
            },
            "name": {
              "type": "string",
              "example": "doggie"
            },
            "category": {
              "ref": "#/components/schemas/Category"
            },
            "tags": {
              "items": {
                "ref": "#/components/schemas/Tag"
              },
              "type": "array"
            },
            "status": {
              "type": "string",
              "enum": [
                "available",
                "pending",
                "sold"
              ],
              "description": "pet status in the store"
            }
          },
          "type": "object",
          "required": [
            "name",
            "photoUrls"
          ]
        }
      }
    }
  }
]
```

and the generated JSON has an incorrect schema (e.g. category is filled
with `id` and `name` instead of `model`, `year` and `age`):

```
{
  "id": 1,
  "name": "Scott",
  "category": {
    "id": 1,
    "name": "Dogs"
  },
  "tags": [
    {
      "id": 1,
      "name": "tag1"
    }
  ],
  "status": "available"
}
```

With this change, `pet_openai_functions` becomes:

```
[
  {
    "name": "addPet",
    "description": "Add a new pet to the store",
    "parameters": {
      "type": "object",
      "properties": {
        "json": {
          "properties": {
            "id": {
              "type": "integer",
              "schema_format": "int64",
              "example": 10
            },
            "name": {
              "type": "string",
              "example": "doggie"
            },
            "category": {
              "properties": {
                "year": {
                  "type": "integer",
                  "schema_format": "int64",
                  "example": 1
                },
                "model": {
                  "type": "string",
                  "example": "Ford"
                },
                "age": {
                  "type": "integer",
                  "example": 42
                }
              },
              "type": "object",
              "required": [
                "model",
                "year",
                "age"
              ]
            },
            "tags": {
              "items": {
                "properties": {
                  "id": {
                    "type": "integer",
                    "schema_format": "int64"
                  },
                  "model_type": {
                    "type": "number"
                  }
                },
                "type": "object"
              },
              "type": "array"
            },
            "status": {
              "type": "string",
              "enum": [
                "available",
                "pending",
                "sold"
              ],
              "description": "pet status in the store"
            }
          },
          "type": "object",
          "required": [
            "name"
          ]
        }
      }
    }
  }
]
```

and the JSON generated by the LLM is:
```
{
  "id": 1,
  "name": "Scott",
  "category": {
    "year": 2022,
    "model": "Dog",
    "age": 42
  },
  "tags": [
    {
      "id": 1,
      "model_type": 1
    }
  ],
  "status": "available"
}
```

which has the intended schema.

- **Twitter handle:** @brunoalvisio

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-09-02 13:17:39 -07:00
xander-art
6cd452d985
Feature/update hunyuan (#25779)
Description: 
    - Add system templates and user templates in integration testing
    - initialize the response id field value to request_id
    - Adjust the default model to hunyuan-pro
    - Remove the default values of Temperature and TopP
    - Add SystemMessage

All the integration tests have passed, on both the first run and a repeat
run (screenshots omitted).

Issue: None
Dependencies: None
Twitter handle: None

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-09-02 12:55:08 +00:00
Yuwen Hu
566e9ba164
community: add Intel GPU support to ipex-llm llm integration (#22458)
**Description:** [IPEX-LLM](https://github.com/intel-analytics/ipex-llm)
is a PyTorch library for running LLMs on Intel CPUs and GPUs (e.g., a local
PC with an iGPU, or discrete GPUs such as Arc, Flex and Max) with very low
latency. This PR adds Intel GPU support to the `ipex-llm` LLM integration
(see the usage sketch below).
**Dependencies:** `ipex-llm`
**Contribution maintainers:** @ivy-lv11 @Oscilloscope98
**tests and docs**: 
- Add: langchain/docs/docs/integrations/llms/ipex_llm_gpu.ipynb
- Update: langchain/docs/docs/integrations/llms/ipex_llm_gpu.ipynb
- Update: langchain/libs/community/tests/llms/test_ipex_llm.py
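
A hedged usage sketch of the GPU path, loosely following the new `ipex_llm_gpu.ipynb` notebook; the model id and the `"device": "xpu"` entry in `model_kwargs` are assumptions rather than confirmed API:

```python
from langchain_community.llms import IpexLLM

# "device": "xpu" is assumed to target an Intel GPU, per the new notebook
llm = IpexLLM.from_model_id(
    model_id="lmsys/vicuna-7b-v1.5",  # illustrative model id
    model_kwargs={"temperature": 0, "max_length": 64, "trust_remote_code": True, "device": "xpu"},
)
print(llm.invoke("What is IPEX-LLM?"))
```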

---------

Co-authored-by: ivy-lv11 <zhicunlv@gmail.com>
2024-09-02 08:49:08 -04:00
ZhangShenao
fd0f147df3
Improvement[Community] Add tool-calling test case for ChatZhipuAI (#25884)
- Add tool-calling test case for `ChatZhipuAI`
2024-08-30 12:05:43 -04:00
默奕
6377185291
add neo4j query constructor for self query (#25288)
- [x] **PR title - community: add neo4j query constructor for self
query**

- [x] **PR message**
- **Description:** adding a Neo4jTranslator so that the Neo4j vector
database can be used with SelfQueryRetriever (see the sketch below)
    - **Issue:** this issue had been raised before in #19748
    - **Dependencies:** none.
    - **Twitter handle:** @moyi_dang
- P.S. I have not added the query constructor to BUILTIN_TRANSLATORS in
this PR; I want to make changes to only one package at a time.
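
A hedged sketch of how the new translator could be wired into `SelfQueryRetriever`; the import path for `Neo4jTranslator`, the index name, and the LLM/embedding choices are assumptions:

```python
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_community.query_constructors.neo4j import Neo4jTranslator  # path is an assumption
from langchain_community.vectorstores import Neo4jVector
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

metadata_field_info = [
    AttributeInfo(name="genre", description="The movie genre", type="string"),
    AttributeInfo(name="year", description="Release year", type="integer"),
]
vectorstore = Neo4jVector.from_existing_index(
    embedding=OpenAIEmbeddings(),
    index_name="movies",  # illustrative index name; connection comes from NEO4J_* env vars
)
retriever = SelfQueryRetriever.from_llm(
    llm=ChatOpenAI(model="gpt-4o-mini"),
    vectorstore=vectorstore,
    document_contents="Brief plot summaries of movies",
    metadata_field_info=metadata_field_info,
    # Passed explicitly because this PR does not register Neo4jTranslator in BUILTIN_TRANSLATORS
    structured_query_translator=Neo4jTranslator(),
)
retriever.invoke("sci-fi movies released after 2010")
```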

- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/


---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-30 14:54:33 +00:00
Allan Ascencio
a8af396a82
added octoai test (#21793)
- [ ] **PR title**: community: add tests for ChatOctoAI

- [ ] **PR message**: 
Description: Added unit tests for the ChatOctoAI class in the community
package to ensure proper validation and default values. These tests
verify the correct initialization of fields, the handling of missing
required parameters, and the proper setting of aliases.
Issue: N/A
Dependencies: None

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-08-29 15:07:27 +00:00
Param Singh
69f9acb60f
premai[patch]: Standardize premai params (#21513)
Thank you for contributing to LangChain!

community:premai[patch]: standardize init args

- updated `temperature` with Pydantic Field, updated the unit test.
- updated `max_tokens` with Pydantic Field, updated the unit test.
- updated `max_retries` with Pydantic Field, updated the unit test.

Related to #20085
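
A minimal construction sketch using the standardized arguments listed above; the project id and key are placeholders:

```python
from langchain_community.chat_models import ChatPremAI

chat = ChatPremAI(
    project_id=8,              # placeholder project id
    premai_api_key="...",      # or set the PREMAI_API_KEY environment variable
    temperature=0.2,
    max_tokens=256,
    max_retries=2,
)
chat.invoke("Say hello in one word.")
```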

---------

Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-08-29 11:01:28 -04:00
Guangdong Liu
fcf9230257
community(sparkllm): Add function call support in Sparkllm chat model. (#20607)
- **Description:** Add function-call support to the SparkLLM chat model.
Related documentation:
https://www.xfyun.cn/doc/spark/Web.html#_2-function-call%E8%AF%B4%E6%98%8E
- @baskaryan

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-08-29 14:38:39 +00:00
Mikhail Khludnev
a017f49fd3
community[patch]: fix #25575 YandexGPTs for _grpc_metadata (#25617)
It fixes two issues:

### YGPTs are broken #25575

```
File ....conda/lib/python3.11/site-packages/langchain_community/embeddings/yandex.py:211, in _make_request(self, texts, **kwargs)
..
--> 211 res = stub.TextEmbedding(request, metadata=self._grpc_metadata)  # type: ignore[attr-defined]

AttributeError: 'YandexGPTEmbeddings' object has no attribute '_grpc_metadata'
```
My gut feeling is that #23841 is the cause.

I had to drop the leading underscore from `_grpc_metadata` as a quick fix,
but I just don't know how to do it in a properly _pydantic_ way.

### minor issue:

if we use `api_key`, which is not the best practice, the code fails with

```
File ~/git/...../python3.11/site-packages/langchain_community/embeddings/yandex.py:119, in YandexGPTEmbeddings.validate_environment(cls, values)
...

AttributeError: 'tuple' object has no attribute 'append'
```

- Added a new integration test, but it requires a YGPT environment and an
active account. I don't know how integration tests are enabled/disabled in CI.
 - Added small unit tests with mocks. Should be fine.

---------

Co-authored-by: mikhail-khludnev <mikhail_khludnev@rntgroup.com>
2024-08-28 18:48:10 -07:00
Serena Ruan
850bf89e48
community[patch]: Support passing extra params for executing functions in UCFunctionToolkit (#25652)
Thank you for contributing to LangChain!

- [x] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core,
experimental, etc. is being modified. Use "docs: ..." for purely docs
changes, "templates: ..." for template changes, "infra: ..." for CI
changes.
  - Example: "community: add foobar LLM"


Support passing extra params when executing UC functions:
The params should be a dictionary with the key EXECUTE_FUNCTION_ARG_NAME;
the assumption is that the function itself doesn't use such a variable
name (one starting and ending with double underscores), and if it does we
raise an exception.
If invalid params are passed to execute_statement, we raise an exception
as well.


- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/


---------

Signed-off-by: Serena Ruan <serena.rxy@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-08-28 18:47:32 -07:00
Moritz Schlager
555f97becb
community[patch]: fix model initialization bug for deepinfra (#25727)
### Description
Adds an init method to ChatDeepInfra that sets the model_name attribute
according to the argument.
### Issue
Currently, the model_name specified by the user during initialization of
the ChatDeepInfra class is never set. Therefore, it always chooses the
default model (meta-llama/Llama-2-70b-chat-hf; however, since this is
deprecated, it probably always uses meta-llama/Llama-3-70b-Instruct). We
stumbled across this issue and fixed it as proposed in this pull
request. Feel free to change the fix according to your coding guidelines
and style; this is just a proposal, and we want to draw attention to the
problem.
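
A minimal sketch of the behaviour this fix enables; the model id and token are placeholders:

```python
from langchain_community.chat_models import ChatDeepInfra

# With this fix, the model passed at init is actually honoured instead of the default
chat = ChatDeepInfra(
    model_name="meta-llama/Meta-Llama-3-70B-Instruct",  # placeholder model id
    deepinfra_api_token="...",                          # or set DEEPINFRA_API_TOKEN
)
chat.invoke("Hello!")
```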
### Dependencies
no additional dependencies required

Feel free to contact me or @timo282 and @finitearth if you have any
questions.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-08-28 02:02:35 -07:00
Tomaz Bratanic
f359e6b0a5
Add mmr to neo4j vector (#25765) 2024-08-27 08:55:19 -04:00
Luis Valencia
99f9a664a5
community: Azure Search Vector Store is missing Access Token Authentication (#24330)
Added Azure Search access token authentication as an alternative to API key auth.
Fixes Issue: https://github.com/langchain-ai/langchain/issues/24263
Dependencies: None
Twitter: @levalencia

@baskaryan

Could you please review? First time creating a PR that fixes some code.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-08-26 15:41:50 -07:00
maang-h
a566a15930
Fix MoonshotChat instantiate with alias (#25755)
- **Description:**
   - Fix `MoonshotChat` instantiation with aliases
   - Add `MoonshotChat` to `__init__.py`
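
A short sketch of the two spellings that should now both work; the model name and key are placeholders:

```python
from langchain_community.chat_models import MoonshotChat  # now re-exported from __init__.py

# Instantiation via aliases (fixed by this PR) ...
chat = MoonshotChat(model="moonshot-v1-8k", api_key="...")
# ... and via the original field names
chat = MoonshotChat(model_name="moonshot-v1-8k", moonshot_api_key="...")
```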

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-26 17:33:22 +00:00
Sharmistha S. Gupta
90439b12f6
Added support for Nebula Chat model (#21925)
Description: Added support for Nebula Chat model in addition to Nebula
Instruct
Dependencies: N/A
Twitter handle: @Symbldotai

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-23 22:34:32 +00:00
Ian
64ace25eb8
<Community>: tidb vector support vector index (#19984)
This PR introduces adjustments to ensure compatibility with the recently
released preview version of [TiDB Serverless Vector
Search](https://tidb.cloud/ai), aiming to prevent user confusion.

- TiDB Vector now supports vector indexing with cosine and l2 distance
strategies, although inner_product remains unsupported.
- Changing the distance strategy is currently not supported, so the test
cases should be adjusted.
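
A hedged sketch of creating a store with an indexable distance strategy; the connection string is a placeholder and the exact kwarg names are assumptions:

```python
from langchain_community.vectorstores import TiDBVectorStore
from langchain_openai import OpenAIEmbeddings

store = TiDBVectorStore.from_texts(
    texts=["TiDB Serverless Vector Search is in preview"],
    embedding=OpenAIEmbeddings(),
    connection_string="mysql+pymysql://user:pass@host:4000/test",  # placeholder DSN
    distance_strategy="cosine",  # "cosine" or "l2" can be vector-indexed; inner_product cannot
)
store.similarity_search("vector search preview", k=1)
```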
2024-08-23 13:59:23 -04:00
Austin Burdette
f355a98bb6
community:yuan2[patch]: standardize init args (#21462)
Updated stop and request_timeout so they are aliased to stop_sequences and
timeout, respectively. Added a test that both continue to set the same
underlying attributes.

Related to
[20085](https://github.com/langchain-ai/langchain/issues/20085)

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-08-23 17:56:19 +00:00
Christophe Bornet
7f1e444efa
partners: Use simsimd types (#25299)
The simsimd package [now has
types](https://github.com/ashvardanian/SimSIMD/releases/tag/v5.0.0)
2024-08-23 10:41:39 -04:00
Erik Lindgren
583b0449eb
community[patch]: Fix Hybrid Search for non-Databricks managed embeddings (#25590)
Description: Send both the query and query_embedding to the Databricks
index for hybrid search.

Issue: When using hybrid search with non-Databricks managed embedding we
currently don't pass both the embedding and query_text to the index.
Hybrid search requires both of these. This change fixes this issue for
both `similarity_search` and `similarity_search_by_vector`.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-08-23 08:57:13 +00:00
Rajendra Kadam
1f1679e960
community: Refactor PebbloSafeLoader (#25582)
**Refactor PebbloSafeLoader**
  - Created `APIWrapper` and moved API logic into it.
  - Moved helper functions to the utility file.
  - Created smaller functions and methods for better readability.
  - Properly read environment variables.
  - Removed unused code.

**Issue:** NA
**Dependencies:** NA
**tests**:  Updated
2024-08-22 11:46:52 -04:00
maang-h
015ab91b83
community[patch]: Add ToolMessage for ChatZhipuAI (#25547)
- **Description:** Add ToolMessage for `ChatZhipuAI` to solve the issue
#25490
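
A hedged round-trip sketch of what #25490 asks for: passing a `ToolMessage` with a tool result back to the model; the model name, key, and tool call are illustrative:

```python
from langchain_community.chat_models import ChatZhipuAI
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage

chat = ChatZhipuAI(model="glm-4", api_key="...")  # placeholder model and key

messages = [
    HumanMessage(content="What's the weather in Beijing?"),
    AIMessage(
        content="",
        tool_calls=[{"name": "get_weather", "args": {"city": "Beijing"}, "id": "call_1"}],
    ),
    # With this change, the tool result can be sent back as a ToolMessage
    ToolMessage(content='{"temp_c": 30}', tool_call_id="call_1"),
]
chat.invoke(messages)
```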
2024-08-19 11:26:38 -04:00
ccurme
b83f1eb0d5
core, partners: implement standard tracing params for LLMs (#25410) 2024-08-16 13:18:09 -04:00
ccurme
8afbab4cf6
langchain[patch]: deprecate various chains (#25310)
- [x] NatbotChain: move to community, deprecate langchain version.
Update to use `prompt | llm | output_parser` instead of LLMChain.
- [x] LLMMathChain: deprecate + add langgraph replacement example to API
ref
- [x] HypotheticalDocumentEmbedder (retriever): update to use `prompt |
llm | output_parser` instead of LLMChain
- [x] FlareChain: update to use `prompt | llm | output_parser` instead
of LLMChain
- [x] ConstitutionalChain: deprecate + add langgraph replacement example
to API ref
- [x] LLMChainExtractor (document compressor): update to use `prompt |
llm | output_parser` instead of LLMChain
- [x] LLMChainFilter (document compressor): update to use `prompt | llm
| output_parser` instead of LLMChain
- [x] RePhraseQueryRetriever (retriever): update to use `prompt | llm |
output_parser` instead of LLMChain
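
For reference, a minimal sketch of the `prompt | llm | output_parser` pattern these components are being migrated to; the model name is illustrative:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Rephrase the query for web search: {question}")
llm = ChatOpenAI(model="gpt-4o-mini")          # any chat model works here
chain = prompt | llm | StrOutputParser()       # replaces LLMChain(prompt=prompt, llm=llm)

chain.invoke({"question": "how do I deprecate a chain?"})
```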
2024-08-15 10:49:26 -04:00
ccurme
ba167dc158
community[patch]: update connection string in azure cosmos integration test (#25438) 2024-08-15 14:07:54 +00:00
maang-h
089f5e6cad
Standardize SparkLLM (#25239)
- **Description:** Standardize SparkLLM, including:
  - docs, for issue #24803
  - streaming support (see the sketch below)
  - an updated API URL
  - model init arg names, for issue #20085
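
A short sketch of the newly supported streaming path; the credentials are placeholders (environment variables also work):

```python
from langchain_community.llms import SparkLLM

llm = SparkLLM(
    spark_app_id="...",      # placeholder credentials
    spark_api_key="...",
    spark_api_secret="...",
)
for chunk in llm.stream("Tell me a short joke."):
    print(chunk, end="", flush=True)
```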
2024-08-13 09:50:12 -04:00
ccurme
e77eeee6ee
core[patch]: add standard tracing params for retrievers (#25240) 2024-08-12 14:51:59 +00:00
ZhangShenao
43deed2a95
Improvement[Embeddings] Add dimension support to ZhipuAIEmbeddings (#25274)
- In Zhipu AI's `embedding-3` and later models, the dimensions parameter
of the embedding can be specified. Ref:
https://bigmodel.cn/dev/api#text_embedding-3
- Add a test case for the `embedding-3` model by assigning dimensions.
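
A minimal sketch of the new parameter; the dimension value is illustrative and the key argument name is an assumption:

```python
from langchain_community.embeddings import ZhipuAIEmbeddings

emb = ZhipuAIEmbeddings(
    model="embedding-3",
    dimensions=1024,   # newly supported for embedding-3 and later models
    api_key="...",     # placeholder key
)
vector = emb.embed_query("hello world")
len(vector)  # should match the requested dimensions
```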
2024-08-11 16:20:37 -04:00
Eugene Yurtsev
b6f0174bb9
community[patch],core[patch]: Update EdenaiTool root_validator and add unit test in core (#25233)
This PR gets rid of the `root_validator(allow_reuse=True)` logic used in
the EdenAI tool in preparation for the pydantic 2 upgrade.
- Add another test for secret_from_env_factory
2024-08-09 15:59:27 +00:00
Eugene Yurtsev
6e57aa7c36
community[patch]: Remove usage of @root_validator(allow_reuse=True) (#25235)
Remove usage of @root_validator(allow_reuse=True)
2024-08-09 10:57:42 -04:00
Eugene Yurtsev
98779797fe
community[patch]: Use get_fields adapter for pydantic (#25191)
Change all usages of `__fields__` to the `get_fields` adapter merged
into langchain_core.

Code mod generated using the following grit pattern:

```
engine marzano(0.1)
language python


`$X.__fields__` => `get_fields($X)` where {
    add_import(source="langchain_core.utils.pydantic", name="get_fields")
}
```
2024-08-08 14:43:09 -04:00
Eugene Yurtsev
bf5193bb99
community[patch]: Upgrade pydantic extra (#25185)
Upgrade to using a literal for specifying `extra`, which is the
recommended approach in pydantic 2.

This works correctly also in pydantic v1.

```python
from pydantic.v1 import BaseModel

class Foo(BaseModel, extra="forbid"):
    x: int

Foo(x=5, y=1)
```

And 


```python
from pydantic.v1 import BaseModel

class Foo(BaseModel):
    x: int

    class Config:
      extra = "forbid"

Foo(x=5, y=1)
```


## Enum -> literal using grit pattern:

```
engine marzano(0.1)
language python
or {
    `extra=Extra.allow` => `extra="allow"`,
    `extra=Extra.forbid` => `extra="forbid"`,
    `extra=Extra.ignore` => `extra="ignore"`
}
```

Re-sorted attributes in Config and removed doc-strings in case we need
to go back and forth between pydantic v1 and v2 during the 0.3 release.
(This will reduce merge conflicts.)


## Sort attributes in Config:

```
engine marzano(0.1)
language python


function sort($values) js {
    return $values.text.split(',').sort().join("\n");
}


class_definition($name, $body) as $C where {
    $name <: `Config`,
    $body <: block($statements),
    $values = [],
    $statements <: some bubble($values) assignment() as $A where {
        $values += $A
    },
    $body => sort($values),
}

```
2024-08-08 17:20:39 +00:00
maang-h
0ba125c3cd
docs: Standardize QianfanLLMEndpoint LLM (#25139)
- **Description:** Standardize QianfanLLMEndpoint LLM, including:
  - docs, the issue #24803 
  - model init arg names, the issue #20085
2024-08-07 10:57:27 -04:00
Pat Patterson
7e7fcf5b1f
community: Fix ValidationError on creating GPT4AllEmbeddings with no gpt4all_kwargs (#25124)
- **Description:** Instantiating `GPT4AllEmbeddings` with no
`gpt4all_kwargs` argument raised a `ValidationError`. Root cause: #21238
added the capability to pass `gpt4all_kwargs` through to the `GPT4All`
instance via `Embed4All`, but broke code that did not specify a
`gpt4all_kwargs` argument (see the sketch below).
- **Issue:** #25119 
- **Dependencies:** None
- **Twitter handle:** [`@metadaddy`](https://twitter.com/metadaddy)
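
A short sketch of the fixed behaviour; the model file name and kwargs are illustrative:

```python
from langchain_community.embeddings import GPT4AllEmbeddings

# Omitting gpt4all_kwargs no longer raises a ValidationError
emb = GPT4AllEmbeddings(model_name="all-MiniLM-L6-v2.gguf2.f16.gguf")

# Passing gpt4all_kwargs (the capability added in #21238) still works
emb_with_kwargs = GPT4AllEmbeddings(
    model_name="all-MiniLM-L6-v2.gguf2.f16.gguf",
    gpt4all_kwargs={"allow_download": True},
)
emb.embed_query("hello")
```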
2024-08-07 13:34:01 +00:00
Virat Singh
264ab96980
community: Add stock market tools from financialdatasets.ai (#25025)
**Description:**
In this PR, I am adding three stock market tools from
financialdatasets.ai (my API!):
- get balance sheets
- get cash flow statements
- get income statements

Twitter handle: [@virattt](https://twitter.com/virattt)

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-08-06 18:28:12 +00:00
maang-h
1028af17e7
docs: Standardize Tongyi (#25103)
- **Description:** Standardize Tongyi LLM, including:
  - docs, the issue #24803
  - model init arg names, the issue #20085
2024-08-06 11:44:12 -04:00
Dobiichi-Origami
061ed250f6
delete the default model value from langchain and discard the need fo… (#24915)
- description: I removed the requirement that `QIANFAN_AK` exist, as well
as the default model name that langchain uses, because there is already a
default model name in the underlying `qianfan` SDK powering the langchain
component.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-08-06 14:11:05 +00:00
ZhangShenao
cda79dbb6c
community[patch]: Optimize test case for MoonshotChat (#25050)
Optimize test case for `MoonshotChat`. Use standard
ChatModelIntegrationTests.
2024-08-05 10:11:25 -04:00
maang-h
f5da0d6d87
docs: Standardize MiniMaxEmbeddings (#24983)
- **Description:** Standardize MiniMaxEmbeddings
  - docs, the issue #24856 
  - model init arg names, the issue #20085
2024-08-03 14:01:23 -04:00
Isaac Francisco
73570873ab
docs: standardizing tavily tool docs (#24736)
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-08-02 22:25:27 +00:00
Bagatur
8e2316b8c2
community[patch]: Release 0.2.11 (#24989) 2024-08-02 20:08:44 +00:00
ZhangShenao
71c0564c9f
community[patch]: Add test case for MoonshotChat (#24960)
Add test case for `MoonshotChat`.
2024-08-02 09:37:31 -04:00
Serena Ruan
1827bb4042
community[patch]: support bind_tools for ChatMlflow (#24547)
Thank you for contributing to LangChain!

- [x] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core,
experimental, etc. is being modified. Use "docs: ..." for purely docs
changes, "templates: ..." for template changes, "infra: ..." for CI
changes.
  - Example: "community: add foobar LLM"


- **Description:**
Support the ChatMlflow.bind_tools method (see the sketch below).
Tested in Databricks (screenshot omitted).



- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/


---------

Signed-off-by: Serena Ruan <serena.rxy@gmail.com>
2024-08-01 08:43:07 -07:00
Nikita Pakunov
c776471ac6
community: fix AttributeError: 'YandexGPT' object has no attribute '_grpc_metadata' (#24432)
Fixes #24049

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-31 21:18:33 +00:00
Eugene Yurtsev
d24b82357f
community[patch]: Add missing annotations (#24890)
This PR adds annotations in the community package.

Annotations are only strictly needed in subclasses of BaseModel for
pydantic 2 compatibility.

This PR adds some unnecessary annotations, but they're not bad to have
regardless for documentation pages.
2024-07-31 18:13:44 +00:00
Rajendra Kadam
a6add89bd4
community[minor]: [PebbloSafeLoader] Implement content-size-based batching (#24871)
- **Title:** [PebbloSafeLoader] Implement content-size-based batching in
the classification flow (loader/doc API)
- **Description:** 
- Implemented content-size-based batching in the loader/doc API, set to
100KB with no external configuration option, intentionally hard-coded to
prevent timeouts.
    - Remove unused field (pb_id) from doc_metadata
- **Issue:** NA
- **Dependencies:** NA
- **Add tests and docs:** Updated
2024-07-31 09:10:28 -04:00