Commit Graph

4942 Commits

Author SHA1 Message Date
Vadym Barda
9bb623381b
core[minor]: update conversion utils to handle RemoveMessage (#23840) 2024-07-03 16:13:31 -04:00
Eugene Yurtsev
4ab78572e7
core[patch]: Speed up unit tests for imports (#23837)
Speed up unit tests for imports
2024-07-03 15:55:15 -04:00
Nico Puhlmann
4a15fce516
langchain: update declarative_base import (#20056)
**Description**: The ``declarative_base()`` function is now available as
sqlalchemy.orm.declarative_base(). (deprecated since: 2.0) (Background
on SQLAlchemy 2.0 at: https://sqlalche.me/e/b8d9)

---------

Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-07-03 15:52:35 -04:00
Théo Deschamps
39b19cf764
core[patch]: extract input variables for path and detail keys in order to format an ImagePromptTemplate (#22613)
- Description: Add support for `path` and `detail` keys in
`ImagePromptTemplate`. Previously, only variables associated with the
`url` key were considered. This PR allows for the inclusion of a local
image path and a detail parameter as input to the format method.
- Issues:
    - fixes #20820 
    - related to #22024 
- Dependencies: None
- Twitter handle: @DeschampsTho5
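
A minimal sketch of the new behaviour (the constructor shape and variable names here are assumptions for illustration):

```python
from langchain_core.prompts.image import ImagePromptTemplate

# Variables under the "detail" (and "path") keys are now extracted as input
# variables, not only those under "url".
prompt = ImagePromptTemplate(
    template={"url": "{image_url}", "detail": "{detail}"},
    input_variables=["image_url", "detail"],
)
print(prompt.format(image_url="https://example.com/cat.png", detail="low"))
```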

---------

Co-authored-by: tdeschamps <tdeschamps@kameleoon.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-07-03 18:58:42 +00:00
Bagatur
a4798802ef
cli[patch]: ruff 0.5 (#23833) 2024-07-03 18:33:15 +00:00
Leonid Ganeline
55f6f91f17
core[patch]: docstrings output_parsers (#23825)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-07-03 14:27:40 -04:00
Philippe PRADOS
26cee2e878
partners[patch]: MongoDB vectorstore to return and accept string IDs (#23818)
The MongoDB vectorstore has some errors:
- `add_texts() -> List` returns a list of `ObjectId`, not a list of
strings
- `delete()` with `id` never removes chunks.
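
A small sketch of the intended behaviour after the fix (`vector_store` is assumed to be an existing `MongoDBAtlasVectorSearch` instance):

```python
# IDs now come back as strings and can be passed straight to delete().
ids = vector_store.add_texts(["some text"])
assert all(isinstance(_id, str) for _id in ids)
vector_store.delete(ids=ids)  # the corresponding chunks are actually removed
```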

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-07-03 14:14:08 -04:00
Ikko Eltociear Ashimine
75734fbcf1
community: fix typo in unit tests for test_zenguard.py (#23819)
enviroment -> environment


2024-07-03 14:05:42 -04:00
Bagatur
a0c2281540
infra: update mypy 1.10, ruff 0.5 (#23721)
```python
"""python scripts/update_mypy_ruff.py"""
import glob
import tomllib
from pathlib import Path

import toml
import subprocess
import re

ROOT_DIR = Path(__file__).parents[1]


def main():
    for path in glob.glob(str(ROOT_DIR / "libs/**/pyproject.toml"), recursive=True):
        print(path)
        with open(path, "rb") as f:
            pyproject = tomllib.load(f)
        try:
            pyproject["tool"]["poetry"]["group"]["typing"]["dependencies"]["mypy"] = (
                "^1.10"
            )
            pyproject["tool"]["poetry"]["group"]["lint"]["dependencies"]["ruff"] = (
                "^0.5"
            )
        except KeyError:
            continue
        with open(path, "w") as f:
            toml.dump(pyproject, f)
        cwd = "/".join(path.split("/")[:-1])
        completed = subprocess.run(
            "poetry lock --no-update; poetry install --with typing; poetry run mypy . --no-color",
            cwd=cwd,
            shell=True,
            capture_output=True,
            text=True,
        )
        logs = completed.stdout.split("\n")

        to_ignore = {}
        for log_line in logs:
            match = re.match(r"^(.*):(\d+): error:.*\[(.*)\]", log_line)
            if match:
                path, line_no, error_type = match.groups()
                if (path, line_no) in to_ignore:
                    to_ignore[(path, line_no)].append(error_type)
                else:
                    to_ignore[(path, line_no)] = [error_type]
        print(len(to_ignore))
        for (error_path, line_no), error_types in to_ignore.items():
            all_errors = ", ".join(error_types)
            full_path = f"{cwd}/{error_path}"
            try:
                with open(full_path, "r") as f:
                    file_lines = f.readlines()
            except FileNotFoundError:
                continue
            file_lines[int(line_no) - 1] = (
                file_lines[int(line_no) - 1][:-1] + f"  # type: ignore[{all_errors}]\n"
            )
            with open(full_path, "w") as f:
                f.write("".join(file_lines))

        subprocess.run(
            "poetry run ruff format .; poetry run ruff --select I --fix .",
            cwd=cwd,
            shell=True,
            capture_output=True,
            text=True,
        )


if __name__ == "__main__":
    main()

```
2024-07-03 10:33:27 -07:00
William FH
6cd56821dc
[Core] Unify function schema parsing (#23370)
Use pydantic to infer nested schemas and all that fun.
Include Bagatur's convenient docstring parser.
Include annotation support.


Previously we didn't adequately support many typehints in the
bind_tools() method on raw functions (like optionals/unions, nested
types, etc.)
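
A hedged sketch of the kind of raw function this change is meant to handle (the model and function here are illustrative):

```python
from typing import List, Optional

from langchain_anthropic import ChatAnthropic


def search_flights(
    origin: str, destination: str, stops: Optional[List[str]] = None
) -> str:
    """Search for flights between two airports.

    Args:
        origin: Departure airport code.
        destination: Arrival airport code.
        stops: Optional list of layover airport codes.
    """
    return "..."


llm = ChatAnthropic(model="claude-3-sonnet-20240229")
# Optionals/unions and nested types in the signature should now be inferred
# into the tool schema, along with the Google-style docstring.
llm_with_tools = llm.bind_tools([search_flights])
```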
2024-07-03 09:55:38 -07:00
Oguz Vuruskaner
2a2c0d1a94
community[deepinfra]: fix tool call parsing. (#23162)
This PR includes fix for DeepInfra tool call parsing.
2024-07-03 12:11:37 -04:00
maang-h
525109e506
feat: Implement ChatBaichuan asynchronous interface (#23589)
- **Description:** Add interface to `ChatBaichuan` to support
asynchronous requests
    - `_agenerate` method
    - `_astream` method
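
A minimal async usage sketch (the API key value is illustrative):

```python
import asyncio

from langchain_community.chat_models import ChatBaichuan


async def main() -> None:
    chat = ChatBaichuan(baichuan_api_key="YOUR_API_KEY")
    # Non-streaming async call (uses the new _agenerate).
    print(await chat.ainvoke("你好"))
    # Streaming async call (uses the new _astream).
    async for chunk in chat.astream("你好"):
        print(chunk.content, end="")


asyncio.run(main())
```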

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-07-03 12:10:04 -04:00
Leonid Ganeline
716a316654
core: docstrings indexing (#23785)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-07-03 11:27:34 -04:00
Leonid Ganeline
30fdc2dbe7
core: docstrings messages (#23788)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-07-03 11:25:00 -04:00
ccurme
54e730f6e4
fireworks[patch]: read from tool calls attribute (#23820) 2024-07-03 11:11:17 -04:00
Bagatur
ebb404527f
anthropic[patch]: Release 0.1.19 (#23783) 2024-07-02 18:17:25 -04:00
Bagatur
6168c846b2
openai[patch]: Release 0.1.14 (#23782) 2024-07-02 18:17:15 -04:00
Bagatur
cb9812593f
openai[patch]: expose model request payload (#23287)
![Screenshot 2024-06-21 at 3 12 12
PM](https://github.com/langchain-ai/langchain/assets/22008038/6243a01f-1ef6-4085-9160-2844d9f2b683)
2024-07-02 17:43:55 -04:00
Bagatur
ed200bf2c4
anthropic[patch]: expose payload (#23291)
![Screenshot 2024-06-21 at 4 56 02
PM](https://github.com/langchain-ai/langchain/assets/22008038/a2c6224f-3741-4502-9607-1a726a0551c9)
2024-07-02 17:43:47 -04:00
Bagatur
7a3d8e5a99
core[patch]: Release 0.2.11 (#23780) 2024-07-02 17:35:57 -04:00
Bagatur
d677dadf5f
core[patch]: mark RemoveMessage beta (#23656) 2024-07-02 21:27:21 +00:00
ccurme
1d54ac93bb
ai21[patch]: release 0.1.7 (#23781) 2024-07-02 21:24:13 +00:00
Asaf Joseph Gardin
320dc31822
partners: AI21 Labs Jamba Streaming Support (#23538)
- **Description:** Added support for streaming in AI21 Jamba Model
- **Twitter handle:** https://github.com/AI21Labs

---------

Co-authored-by: Asaf Gardin <asafg@ai21.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-02 17:15:46 -04:00
Qingchuan Hao
5cd4083457
community: make bing web search as the only option (#23523)
This PR makes Bing web search the only option for BingSearchAPIWrapper,
to facilitate and simplify the user interface in LangChain.
This is a follow-up to
https://github.com/langchain-ai/langchain/pull/23306.
2024-07-02 17:13:54 -04:00
ccurme
7c1cddf1b7
anthropic[patch]: release 0.1.18 (#23778) 2024-07-02 16:46:47 -04:00
ccurme
c9dac59008
anthropic[patch]: fix model name in some integration tests (#23779) 2024-07-02 20:45:52 +00:00
Bagatur
7a6c06cadd
anthropic[patch]: tool output parser fix (#23647) 2024-07-02 16:33:22 -04:00
ccurme
46cbf0e4aa
anthropic[patch]: use core output parsers for structured output (#23776)
Also add to standard tests for structured output.
2024-07-02 16:15:26 -04:00
kiarina
dc396835ed
langchain_anthropic: add stop_reason in ChatAnthropic stream result (#23689)
`ChatAnthropic` can get `stop_reason` from the resulting `AIMessage` in
`invoke` and `ainvoke`, but not in `stream` and `astream`.
This is a different behavior from `ChatOpenAI`.
It is possible to get `stop_reason` from `stream` as well, since it is
needed to determine the next action after the LLM call. This would be
easier to handle in situations where only `stop_reason` is needed.

- Issue: NA
- Dependencies: NA
- Twitter handle: https://x.com/kiarina37
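
A small sketch of how this could be observed (assuming `stop_reason` surfaces in the streamed chunks' `response_metadata`):

```python
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-sonnet-20240229")

stop_reason = None
for chunk in llm.stream("Tell me a joke"):
    # With this change the final chunk is expected to carry stop_reason,
    # just like the AIMessage returned by invoke()/ainvoke().
    stop_reason = chunk.response_metadata.get("stop_reason") or stop_reason
print(stop_reason)
```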
2024-07-02 15:16:20 -04:00
maang-h
e4e28a6ff5
community[patch]: Fix MiniMaxChat validate_environment error (#23770)
- **Description:** Fix some issues in MiniMaxChat
  - Fix `minimax_api_host` not in `values` error
  - Stop reading `minimax_group_id` from environment variables; it is no
longer used in MiniMaxChat
  - Invoke callback prior to yielding token; see issue #16913
2024-07-02 13:23:32 -04:00
SN
acc457f645
core[patch]: fix nested sections for mustache templating (#23747)
The prompt template variable detection only worked for singly-nested
sections because we just kept track of whether we were in a section and
then set that to false as soon as we encountered an end block. i.e. the
following:

```
{{#outerSection}}
    {{variableThatShouldntShowUp}}
    {{#nestedSection}}
        {{nestedVal}}
    {{/nestedSection}}
    {{anotherVariableThatShouldntShowUp}}
{{/outerSection}}
```

Would yield `['outerSection', 'anotherVariableThatShouldntShowUp']` as
input_variables (whereas it should just yield `['outerSection']`). This
fixes that by keeping track of the current depth and using a stack.
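
A small sketch of how the fix surfaces through `PromptTemplate` (assuming the mustache template format):

```python
from langchain_core.prompts import PromptTemplate

template = """
{{#outerSection}}
    {{variableThatShouldntShowUp}}
    {{#nestedSection}}
        {{nestedVal}}
    {{/nestedSection}}
{{/outerSection}}
"""
prompt = PromptTemplate.from_template(template, template_format="mustache")
# Expected after the fix: ['outerSection'] only.
print(prompt.input_variables)
```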
2024-07-02 10:20:45 -07:00
Eugene Yurtsev
46ff0f7a3c
community[patch]: Update @root_validators to use explicit pre=True or pre=False (#23737) 2024-07-02 10:47:21 -04:00
Igor Drozdov
b664dbcc36
feat(community): add support for tool_calls response (#23765)
When `model_kwargs={"tools": tools}` are passed to `ChatLiteLLM`, they
are executed, but the response is not recognized correctly

Let's add `tool_calls` to the `additional_kwargs`

## ChatAnthropic

I used the following example to verify the output of llm with tools:

```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_anthropic import ChatAnthropic

class GetWeather(BaseModel):
    '''Get the current weather in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

class GetPopulation(BaseModel):
    '''Get the current population in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

llm = ChatAnthropic(model="claude-3-sonnet-20240229")
llm_with_tools = llm.bind_tools([GetWeather, GetPopulation])
ai_msg = llm_with_tools.invoke("Which city is hotter today and which is bigger: LA or NY?")
print(ai_msg.tool_calls)
```

I get the following response:

```json
[{'name': 'GetWeather', 'args': {'location': 'Los Angeles, CA'}, 'id': 'toolu_01UfDA89knrhw3vFV9X47neT'}, {'name': 'GetWeather', 'args': {'location': 'New York, NY'}, 'id': 'toolu_01NrYVRYae7m7z7tBgyPb3Gd'}, {'name': 'GetPopulation', 'args': {'location': 'Los Angeles, CA'}, 'id': 'toolu_01EPFEpDgzL6vV2dTpD9SVP5'}, {'name': 'GetPopulation', 'args': {'location': 'New York, NY'}, 'id': 'toolu_01B5J6tPJXgwwfhQX9BHP2dt'}]
```

## LiteLLM

Based on https://litellm.vercel.app/docs/completion/function_call

```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.utils.function_calling import convert_to_openai_tool
import litellm

class GetWeather(BaseModel):
    '''Get the current weather in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

class GetPopulation(BaseModel):
    '''Get the current population in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

prompt = "Which city is hotter today and which is bigger: LA or NY?"
tools = [convert_to_openai_tool(GetWeather), convert_to_openai_tool(GetPopulation)]

response = litellm.completion(model="claude-3-sonnet-20240229", messages=[{'role': 'user', 'content': prompt}], tools=tools)
print(response.choices[0].message.tool_calls)
```

```python
[ChatCompletionMessageToolCall(function=Function(arguments='{"location": "Los Angeles, CA"}', name='GetWeather'), id='toolu_01HeDWV5vP7BDFfytH5FJsja', type='function'), ChatCompletionMessageToolCall(function=Function(arguments='{"location": "New York, NY"}', name='GetWeather'), id='toolu_01EiLesUSEr3YK1DaE2jxsQv', type='function'), ChatCompletionMessageToolCall(function=Function(arguments='{"location": "Los Angeles, CA"}', name='GetPopulation'), id='toolu_01Xz26zvkBDRxEUEWm9pX6xa', type='function'), ChatCompletionMessageToolCall(function=Function(arguments='{"location": "New York, NY"}', name='GetPopulation'), id='toolu_01SDqKnsLjvUXuBsgAZdEEpp', type='function')]
```

## ChatLiteLLM

When I try the following

```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.utils.function_calling import convert_to_openai_tool
from langchain_community.chat_models import ChatLiteLLM

class GetWeather(BaseModel):
    '''Get the current weather in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

class GetPopulation(BaseModel):
    '''Get the current population in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

prompt = "Which city is hotter today and which is bigger: LA or NY?"
tools = [convert_to_openai_tool(GetWeather), convert_to_openai_tool(GetPopulation)]

llm = ChatLiteLLM(model="claude-3-sonnet-20240229", model_kwargs={"tools": tools})
ai_msg = llm.invoke(prompt)
print(ai_msg)
print(ai_msg.tool_calls)
```

```python
content="Okay, let's find out the current weather and populations for Los Angeles and New York City:" response_metadata={'token_usage': Usage(prompt_tokens=329, completion_tokens=193, total_tokens=522), 'model': 'claude-3-sonnet-20240229', 'finish_reason': 'tool_calls'} id='run-748b7a84-84f4-497e-bba1-320bd4823937-0'
[]
```

---

When I apply the changes of this PR, the output is

```json
[{'name': 'GetWeather', 'args': {'location': 'Los Angeles, CA'}, 'id': 'toolu_017D2tGjiaiakB1HadsEFZ4e'}, {'name': 'GetWeather', 'args': {'location': 'New York, NY'}, 'id': 'toolu_01WrDpJfVqLkPejWzonPCbLW'}, {'name': 'GetPopulation', 'args': {'location': 'Los Angeles, CA'}, 'id': 'toolu_016UKyYrVAV9Pz99iZGgGU7V'}, {'name': 'GetPopulation', 'args': {'location': 'New York, NY'}, 'id': 'toolu_01Sgv1imExFX1oiR1Cw88zKy'}]
```

Co-authored-by: Igor Drozdov <idrozdov@gitlab.com>
2024-07-02 10:42:08 -04:00
Eugene Yurtsev
338cef35b4
community[patch]: update @root_validator in utilities namespace (#23768)
Update all utilities to use `pre=True` or `pre=False`

https://github.com/langchain-ai/langchain/issues/22819
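
An illustrative sketch of the pattern (the class here is hypothetical, not one of the updated utilities):

```python
from langchain_core.pydantic_v1 import BaseModel, root_validator


class SomeAPIWrapper(BaseModel):
    api_key: str = ""

    # Explicit pre=True (or pre=False) instead of a bare @root_validator,
    # in preparation for the pydantic 2 migration.
    @root_validator(pre=True)
    def validate_environment(cls, values: dict) -> dict:
        values.setdefault("api_key", "from-env")
        return values
```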
2024-07-02 14:33:01 +00:00
wenngong
ee5eedfa04
partners: support reading HuggingFace params from env (#23309)
Description:
1. The partners/HuggingFace module supports reading params from env. Does
not adjust the langchain_community/.../huggingfaceXX modules since they are
deprecated.
2. pydantic 2 @root_validator migration.

Issue: #22448 #22819

---------

Co-authored-by: gongwn1 <gongwn1@lenovo.com>
2024-07-02 10:12:45 -04:00
antonpibm
ffde8a6a09
Milvus vectorstore: fix pass ids as argument after upsert (#23761)
**Description**: The Milvus vectorstore supports both `add_documents` via
the base class and an `upsert` method which deletes and re-adds documents
based on their ids.

**Issue**: Due to a mismatch in the interfaces, the ids used by `upsert`
were neglected in `add_documents`, as `ids` are passed as a positional
argument in `upsert` but via `kwargs` in `add_documents`.

This caused exceptions and inconsistency in the DB, tested with
`auto_id=False`.

**Fix**: pass `ids` via `kwargs` to `add_documents`.
2024-07-02 13:45:30 +00:00
Eugene Yurtsev
d084172b63
community[patch]: root validator set explicit pre=False or pre=True (#23764)
See issue: https://github.com/langchain-ai/langchain/issues/22819
2024-07-02 09:42:05 -04:00
mattthomps1
cc55823486
docs: updated PPLX model (#23723)
Description: updated pplx docs to reference a currently [supported
model](https://docs.perplexity.ai/docs/model-cards). pplx-70b-online
-> llama-3-sonar-small-32k-online

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-07-02 08:48:49 -04:00
Jacob Lee
7791d92711
community[patch]: Fix requests alias for load_tools (#23734)
CC @baskaryan
2024-07-01 15:02:14 -07:00
Eugene Yurtsev
f24e38876a
community[patch]: Update root_validators to use explicit pre=True or pre=False (#23736) 2024-07-01 17:13:23 -04:00
Yannick Stephan
5b1de2ae93
mistralai: Fixed streaming in MistralAI with ainvoke and callbacks (#22000)
# Fix streaming in mistral with ainvoke 
- [x] **PR title**
- [x] **PR message**
- [x] **Add tests and docs**:
  1. [x] Added a test for the fixed integration.
2. [x] An example notebook showing its use. It lives in
`docs/docs/integrations` directory.
- [x] **Lint and test**: Ran `make format`, `make lint` and `make test`
from the root of the package(s) I've modified.

Hello

* I identified an issue in the mistral package where the callback
streaming (see on_llm_new_token) was not functioning correctly when the
streaming parameter was set to True and called with `ainvoke`.
* The root cause of the problem was the streaming parameter not being
taken into account (I think it's an oversight).
* To resolve the issue, I added the `streaming` attribute.
* Now, the callback with streaming works as expected when the streaming
parameter is set to True.

## How to reproduce

```
from langchain_mistralai.chat_models import ChatMistralAI

chain = ChatMistralAI(streaming=True)
# Add a callback, then (inside an async function):
await chain.ainvoke("...")

# Observe on_llm_new_token
# Now the callback receives streaming tokens; before, it was called with the grouped output.
```

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-01 20:53:09 +00:00
Eugene Yurtsev
5d2262af34
community[patch]: Update root_validators to use pre=True or pre=False (#23731)
Update root_validators in preparation for pydantic 2 migration.
2024-07-01 20:10:15 +00:00
Eugene Yurtsev
ebcee4f610
core[patch]: Add versionadded to get_by_ids (#23728) 2024-07-01 15:16:00 -04:00
Eugene Yurtsev
e800f6bb57
core[minor]: Create BaseMedia object (#23639)
This PR implements a BaseContent object from which Document and Blob
objects will inherit proposed here:
https://github.com/langchain-ai/langchain/pull/23544

Alternative: Create a base object that only has an identifier and no
metadata.

For now decided against it, since that refactor can be done at a later
time. It also feels a bit odd since our IDs are optional at the moment.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-01 15:07:30 -04:00
Chip Davis
04bc5f1a95
partners[azure]: fix having openai_api_base set for other packages (#22068)
This fix is for #21726. When having other packages installed that
require the `openai_api_base` environment variable, users are not able
to instantiate the AzureChatModels or AzureEmbeddings.

This PR adds a new value `ignore_openai_api_base`, which is a bool. When
set to True, it sets `openai_api_base` to `None`.

Two new tests were added to `test_azure`, and a new file
`test_azure_embeddings` was added.

A different approach may be better for this. If you can think of better
logic, let me know and I can adjust it.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-07-01 18:35:20 +00:00
Nuno Campos
b36e95caa9
core[patch]: use async messages where possible (#23718)
Fix #23716

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-07-01 18:33:05 +00:00
Spyros Avlonitis
8cfb2fa1b7
core[minor]: Add maxsize for InMemoryCache (#23405)
This PR introduces a maxsize parameter for the InMemoryCache class,
allowing users to specify the maximum number of items to store in the
cache. If the cache exceeds the specified maximum size, the oldest items
are removed. Additionally, comprehensive unit tests have been added to
ensure all functionalities are thoroughly tested. The tests are written
using pytest and cover both synchronous and asynchronous methods.
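
A small usage sketch (assuming maxsize is passed to the constructor):

```python
from langchain_core.caches import InMemoryCache
from langchain_core.outputs import Generation

cache = InMemoryCache(maxsize=2)
cache.update("prompt-1", "llm-string", [Generation(text="a")])
cache.update("prompt-2", "llm-string", [Generation(text="b")])
cache.update("prompt-3", "llm-string", [Generation(text="c")])  # evicts the oldest entry
print(cache.lookup("prompt-1", "llm-string"))  # None
```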

Twitter: @spyrosavl

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-07-01 14:21:21 -04:00
maang-h
96af8f31ae
community[patch]: Invoke callback prior to yielding token (#23638)
- **Description:** Invoke callback prior to yielding token in stream and
astream methods for ChatZhipuAI.
- **Issue:** the issue #16913
2024-07-01 18:12:24 +00:00
Eugene Yurtsev
b5aef4cf97
core[patch]: Fix llm string representation for serializable models (#23416)
Fix LLM string representation for serializable objects.

Fix for issue: https://github.com/langchain-ai/langchain/issues/23257

The llm string of serializable chat models is the serialized
representation of the object. LangChain serialization dumps some basic
information about non serializable objects including their repr() which
includes an object id.

This means that if a chat model has any non serializable fields (e.g., a
cache), then any new instantiation of those fields will change the
llm representation of the chat model and cause cache misses.

i.e., re-instantiating a postgres cache would result in cache misses!
2024-07-01 14:06:33 -04:00
nobbbbby
3904f2cd40
core: fix NameError (#23658)
**Description:** In the chat_models module of the language model, the
import statement for BaseModel has been moved from the conditionally
imported section to the main import area, fixing `NameError`.
**Issue:** fix `NameError`
2024-07-01 17:51:23 +00:00
Jordy Jackson Antunes da Rocha
a50eabbd48
experimental: LLMGraphTransformer add missing conditional adding restrictions to prompts for LLM that do not support function calling (#22793)
- Description: Modified the prompt created by the function
`create_unstructured_prompt` (which is called for LLMs that do not
support function calling) by adding conditional checks that verify if
restrictions on entity types and rel_types should be added to the
prompt. If the user provides a sufficiently large text, the current
prompt **may** fail to produce results in some LLMs. I first saw
this issue when I implemented a custom LLM class that did not support
function calling and used Gemini 1.5 Pro, but I was able to replicate
this issue using OpenAI models.

By loading a sufficiently large text
```python
from langchain_community.llms import Ollama
from langchain_openai import ChatOpenAI, OpenAI
from langchain_core.prompts import PromptTemplate
import re
from langchain_experimental.graph_transformers import LLMGraphTransformer
from langchain_core.documents import Document

with open("texto-longo.txt", "r") as file:
    full_text = file.read()
    partial_text = full_text[:4000]

documents = [Document(page_content=partial_text)] # cropped to fit GPT 3.5 context window
```

And using the chat class (that has function calling)
```python
chat_openai = ChatOpenAI(model="gpt-3.5-turbo", model_kwargs={"seed": 42})
chat_gpt35_transformer = LLMGraphTransformer(llm=chat_openai)
graph_from_chat_gpt35 = chat_gpt35_transformer.convert_to_graph_documents(documents)
```
It works:
```
>>> print(graph_from_chat_gpt35[0].nodes)
[Node(id="Jesu, Joy of Man's Desiring", type='Music'), Node(id='Godel', type='Person'), Node(id='Johann Sebastian Bach', type='Person'), Node(id='clever way of encoding the complicated expressions as numbers', type='Concept')]
```

But if you try to use the non-chat LLM class (that does not support
function calling)
```python
openai = OpenAI(
    model="gpt-3.5-turbo-instruct",
    max_tokens=1000,
)
gpt35_transformer = LLMGraphTransformer(llm=openai)
graph_from_gpt35 = gpt35_transformer.convert_to_graph_documents(documents)
```

It uses the prompt that has issues and sometimes does not produce any
result
```
>>> print(graph_from_gpt35[0].nodes)
[]
```

After implementing the changes, I was able to use both classes more
consistently:

```shell
>>> chat_gpt35_transformer = LLMGraphTransformer(llm=chat_openai)
>>> graph_from_chat_gpt35 = chat_gpt35_transformer.convert_to_graph_documents(documents)
>>> print(graph_from_chat_gpt35[0].nodes)
[Node(id="Jesu, Joy Of Man'S Desiring", type='Music'), Node(id='Johann Sebastian Bach', type='Person'), Node(id='Godel', type='Person')]
>>> gpt35_transformer = LLMGraphTransformer(llm=openai)
>>> graph_from_gpt35 = gpt35_transformer.convert_to_graph_documents(documents)
>>> print(graph_from_gpt35[0].nodes)
[Node(id='I', type='Pronoun'), Node(id="JESU, JOY OF MAN'S DESIRING", type='Song'), Node(id='larger memory', type='Memory'), Node(id='this nice tree structure', type='Structure'), Node(id='how you can do it all with the numbers', type='Process'), Node(id='JOHANN SEBASTIAN BACH', type='Composer'), Node(id='type of structure', type='Characteristic'), Node(id='that', type='Pronoun'), Node(id='we', type='Pronoun'), Node(id='worry', type='Verb')]
```

The results are a little inconsistent because the GPT 3.5 model may
produce incomplete json due to the token limit, but that could be solved
(or mitigated) by checking for a complete json when parsing it.
2024-07-01 17:33:51 +00:00
Eugene Yurtsev
4f1821db3e
core[minor]: Add get_by_ids to vectorstore interface (#23594)
This PR adds a part of the indexing API proposed in this RFC
https://github.com/langchain-ai/langchain/pull/23544/files.

It allows rolling out `get_by_ids` which should be uncontroversial to
existing vectorstores without introducing new abstractions.

The semantics for this method depend on the ability of identifying
returned documents using the new optional ID field on documents:
https://github.com/langchain-ai/langchain/pull/23411

Alternatives are:

1. Relax the sequence requirement

```python
def get_by_ids(self, ids: Iterable[str], /) -> Iterable[Document]:
```

Rejected:
- implementations are more likely to start batching with bad defaults
- users would need to call list() or we'd need to introduce another
convenience method

2. Support more kwargs

```python

def get_by_ids(self, ids: Sequence[str], /, **kwargs) -> List[Document]:
...
```

Rejected: 
- No need for `batch` parameter since IDs is a sequence
- Output cannot be customized since `Document` is fixed. (e.g.,
parameters could be useful to grab extra metadata like the vector that
was indexed with the Document or to project a part of the document)
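
A usage sketch of the chosen signature (`vector_store` is assumed to be any vectorstore that has rolled the method out):

```python
docs = vector_store.get_by_ids(["id-1", "id-2"])
for doc in docs:
    # Returned documents are identified via the new optional Document.id field.
    print(doc.id, doc.page_content)
```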
2024-07-01 13:04:33 -04:00
Valentin
bf402f902e
community: Fix LanceDB similarity search bug (#23591)
**Description:** LanceDB didn't allow querying the database using
similarity score thresholds because the metrics value was missing. This
PR simply fixes that bug.
**Issue:** not applicable
**Dependencies:** none
**Twitter handle:** not available

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-07-01 16:33:45 +00:00
Bagatur
389a568f9a
standard-tests[patch]: add anthropic format integration test (#23717) 2024-07-01 11:06:04 -04:00
Rafael Pereira
4b9517db85
Jira: Allow Jira access using only the token (#23708)
- **Description:** At the moment the Jira wrapper only accepts using the
username and password/token at the same time. However, Jira also allows
connecting using only the token, which is useful in an enterprise context.
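
A small sketch of token-only authentication (parameter names taken from the existing wrapper fields; values are illustrative):

```python
from langchain_community.utilities.jira import JiraAPIWrapper

jira = JiraAPIWrapper(
    jira_instance_url="https://your-company.atlassian.net",
    jira_api_token="your-personal-access-token",
    # jira_username is no longer required when authenticating with a token only.
)
```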

Co-authored-by: rpereira <rafael.pereira@criticalsoftware.com>
2024-07-01 13:13:51 +00:00
Tim Van Wassenhove
24916c6703
community: Register pandas df in duckdb when creating vector_store (#23690)
- **Description:** Register pandas df in duckdb when creating
vector_store
- **Issue:** Resolves #23308
- **Dependencies:** None
- **Twitter handle:** @timvw

Co-authored-by: Tim Van Wassenhove <tim.van.wassenhove@telenetgroup.be>
2024-07-01 09:12:06 -04:00
Bagatur
29aa9d6750
groq[patch]: Release 0.1.6 (#23655) 2024-06-29 07:35:23 -04:00
Bagatur
f2d0c13a15
fireworks[patch]: Release 0.1.4 (#23654) 2024-06-29 07:35:16 -04:00
Bagatur
9a5e35d1ba
mistralai[patch]: Release 0.1.9 (#23653) 2024-06-29 07:35:09 -04:00
Mateusz Szewczyk
a78ccb993c
ibm: Add support for Chat Models (#22979) 2024-06-29 01:59:25 -07:00
Bagatur
af2c05e5f3
openai[patch]: Release 0.1.13 (#23651) 2024-06-28 17:10:30 -07:00
Bagatur
b63c7f10bc
anthropic[patch]: Release 0.1.17 (#23650) 2024-06-28 17:07:08 -07:00
Bagatur
fc8fd49328
openai, anthropic, ...: with_structured_output to pass in explicit tool choice (#23645)
...community, mistralai, groq, fireworks

part of #23644
2024-06-28 16:39:53 -07:00
Bagatur
81064017a9
docs: azure openai docstring (#23643)
part of #22296
2024-06-28 15:15:58 -07:00
Bagatur
381aedcc61
docs: standardize azure openai page (#23642)
part of #22296
2024-06-28 15:15:41 -07:00
Vadym Barda
e8d77002ea
core: add RemoveMessage (#23636)
This change adds a new message type `RemoveMessage`. This will enable
`langgraph` users to manually modify graph state (or have the graph
nodes modify the state) to remove messages by `id`

Examples:

* allow users to delete messages from state by calling

```python
graph.update_state(config, values=[RemoveMessage(id=state.values[-1].id)])
```

* allow nodes to delete messages

```python
graph.add_node("delete_messages", lambda state: [RemoveMessage(id=state[-1].id)])
```
2024-06-28 14:40:02 -07:00
ccurme
8fce8c6771
community: fix extended tests (#23640) 2024-06-28 16:35:38 -04:00
ccurme
5d93916665
openai[patch]: release 0.1.12 (#23641) 2024-06-28 19:51:16 +00:00
Jacob Lee
a032583b17
docs[patch]: Update diagrams (#23613) 2024-06-28 12:36:00 -07:00
ccurme
390ee8d971
standard-tests: add test for structured output (#23631)
- add test for structured output
- fix bug with structured output for Azure
- better testing on Groq (break out Mixtral + Llama3 and add xfails
where needed)
2024-06-28 15:01:40 -04:00
j pradhan
5f21eab491
community:perplexity[patch]: standardize init args (#21794)
updated request_timeout default alias value per related docstring.

Related to
[20085](https://github.com/langchain-ai/langchain/issues/20085)

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-06-28 13:26:12 +00:00
mackong
11483b0fb8
community[patch]: set tool name for tongyi&qianfan llm (#22889)
- **Description:** The name of ToolMessage defaults to None, which
makes the tool message sent to the LLM look like
 ```json
{"role": "tool",
   "tool_call_id": "",
   "content": "{\"time\": \"12:12\"}",
   "name": null}
```
But the name seems essential for some LLMs like TongYi Qwen, so we need to set the name using agent_action's tool value.
  - **Issue:** N/A
  - **Dependencies:** N/A
2024-06-28 09:17:05 -04:00
Leonid Ganeline
e4caa41aa9
community: docstrings toolkits (#23616)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-06-28 08:40:52 -04:00
ccurme
adf2dc13de
community: fix lint (#23611) 2024-06-27 22:12:16 +00:00
Leonid Ganeline
75a44fe951
core: chat_* docstrings (#23412)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-06-27 17:29:38 -04:00
Bagatur
3b1fcb2a65
chroma[patch]: Release 0.1.2 (#23604) 2024-06-27 13:58:24 -07:00
Eugene Yurtsev
68f348357e
community[patch]: Test InMemoryVectorStore with RWAPI test suite (#23603)
Add standard test suite to InMemoryVectorStore implementation.
2024-06-27 16:43:43 -04:00
Eugene Yurtsev
da7beb1c38
core[patch]: Add unit test when catching generator exit (#23402)
This PR adds a unit test for:
https://github.com/langchain-ai/langchain/pull/22662
And narrows the scope where the exception is caught.
2024-06-27 20:36:07 +00:00
NG Sai Prasanth
5e6d23f27d
community: Standardise tool import for arxiv & semantic scholar (#23578)
- **Description:** Fixing the way users have to import Arxiv and
Semantic Scholar
- **Issue:** Changed to use `from langchain_community.tools.arxiv import
ArxivQueryRun` instead of `from langchain_community.tools.arxiv.tool
import ArxivQueryRun`
    - **Dependencies:** None
    - **Twitter handle:** Nope
2024-06-27 16:35:50 -04:00
ccurme
d04f657424
langchain[patch]: deprecate ConversationChain (#23504)
Would like some feedback on how to best incorporate legacy memory
objects into `RunnableWithMessageHistory`.
2024-06-27 16:32:44 -04:00
Ayo Ayibiowu
c6f700b7cb
fix(community): allow support for disabling max_tokens args (#21534)
This PR fixes an issue where it was not possible to use unlimited/infinite
tokens from the respective provider when using the LiteLLM provider.

This is an issue when working in an agent environment, where the token
usage can drastically increase beyond the initially set value, causing
unexpected behavior.
2024-06-27 16:28:59 -04:00
Leonid Ganeline
c0fdbaac85
langchain: docstrings in agents root (#23561)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-06-27 15:52:18 -04:00
Leonid Ganeline
b64c4b4750
langchain: docstrings agents nested (#23598)
Added missed docstrings. Formatted docstrings to the consistent form.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-06-27 19:49:41 +00:00
mackong
70834cd741
community[patch]: support convert FunctionMessage for Tongyi (#23569)
**Description:** For a function call agent with Tongyi, the
AgentAction will be converted to a FunctionMessage by

47f69fe0d8/libs/core/langchain_core/agents.py (L188)
But Tongyi's *convert_message_to_dict* doesn't support
FunctionMessage

47f69fe0d8/libs/community/langchain_community/chat_models/tongyi.py (L184-L207)
so the next round of conversation will fail with a *TypeError*
exception.

This patch adds the support to convert FunctionMessage for Tongyi.

**Issue:** N/A
**Dependencies:** N/A
2024-06-27 15:49:26 -04:00
Bagatur
d45ece0e58
chroma[patch]: loosen py req (#23599)
currently causes issues if you try adding it to a project that supports
py<4
2024-06-27 12:40:59 -07:00
Mohammad Mohtashim
4796b7eb15
[Community [HuggingFace]]: Small Fix for ChatHuggingFace. (#22925)
- **Description:** A small fix where I moved the `available_endpoints`
in order to avoid the token error in the issue below. I have also added a
conftest file and updated the `scipy` and `numpy` versions to support newer
python versions in the poetry files.
- **Issue:** #22804

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-06-27 19:37:20 +00:00
ccurme
bffc3c24a0
openai[patch]: release 0.1.11 (#23596) 2024-06-27 18:48:40 +00:00
ccurme
a1520357c8
openai[patch]: revert addition of "name" to supported properties for tool messages (#23600) 2024-06-27 18:40:04 +00:00
joshc-ai21
16a293cc3a
Small bug fixes (#23353)
Small bug fixes according to your comments

---------

Signed-off-by: Joffref <mariusjoffre@gmail.com>
Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Baskar Gopinath <73015364+baskargopinath@users.noreply.github.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Mathis Joffre <51022808+Joffref@users.noreply.github.com>
Co-authored-by: Baur <baur.krykpayev@gmail.com>
Co-authored-by: Nuradil <nuradil.maksut@icloud.com>
Co-authored-by: Nuradil <133880216+yaksh0nti@users.noreply.github.com>
Co-authored-by: Jacob Lee <jacoblee93@gmail.com>
Co-authored-by: Rave Harpaz <rave.harpaz@oracle.com>
Co-authored-by: RHARPAZ <RHARPAZ@RHARPAZ-5750.us.oracle.com>
Co-authored-by: Arthur Cheng <arthur.cheng@oracle.com>
Co-authored-by: Tomaz Bratanic <bratanic.tomaz@gmail.com>
Co-authored-by: RUO <61719257+comsa33@users.noreply.github.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Luis Rueda <userlerueda@gmail.com>
Co-authored-by: Jib <Jibzade@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
Co-authored-by: S M Zia Ur Rashid <smziaurrashid@gmail.com>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Co-authored-by: yuncliu <lyc1990@qq.com>
Co-authored-by: wenngong <76683249+wenngong@users.noreply.github.com>
Co-authored-by: gongwn1 <gongwn1@lenovo.com>
Co-authored-by: Mirna Wong <89008547+mirnawong1@users.noreply.github.com>
Co-authored-by: Rahul Triptahi <rahul.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: maang-h <55082429+maang-h@users.noreply.github.com>
Co-authored-by: asafg <asafg@ai21.com>
Co-authored-by: Asaf Joseph Gardin <39553475+Josephasafg@users.noreply.github.com>
2024-06-27 17:58:22 +00:00
ccurme
5536420bee
openai[patch]: add comment (#23595)
Forgot to push this to
https://github.com/langchain-ai/langchain/pull/23551
2024-06-27 16:47:14 +00:00
andrewmjc
9f0f3c7e29
partners[openai]: Add name field to tool message to match OpenAI spec (#23551)
Discovered alongside @t968914

  - **Description:**
According to OpenAI docs, tool messages (response from calling tools)
must have a 'name' field.

https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models

  - **Issue:** N/A (as of right now)
  - **Dependencies:** N/A
  - **Twitter handle:** N/A

2024-06-27 12:42:36 -04:00
Krista Pratico
85e36b0f50
partners[openai]: only add stream_options to kwargs if requested (#23552)
- **Description:** This PR
https://github.com/langchain-ai/langchain/pull/22854 added the ability
to pass `stream_options` through to the openai service to get token
usage information in the response. Currently OpenAI supports this
parameter, but Azure OpenAI does not yet. For users who proxy their
calls to both services through ChatOpenAI, this breaks when targeting
Azure OpenAI (see related discussion opened in openai-python:
https://github.com/openai/openai-python/issues/1469#issuecomment-2192658630).

> Error code: 400 - {'error': {'code': None, 'message': 'Unrecognized
request argument supplied: stream_options', 'param': None, 'type':
'invalid_request_error'}}

This PR fixes the issue by only adding `stream_options` to the request
if it's actually requested by the user (i.e. set to True). If I'm not
mistaken, we have a test case that already covers this scenario:
https://github.com/langchain-ai/langchain/blob/master/libs/partners/openai/tests/integration_tests/chat_models/test_base.py#L398-L399

- **Issue:** Issue opened in openai-python:
https://github.com/openai/openai-python/issues/1469
  - **Dependencies:** N/A
  - **Twitter handle:** N/A

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-06-27 12:23:05 -04:00
Eugene Yurtsev
96b72edac8
core[minor]: Add optional ID field to Document schema (#23411)
This PR adds an optional ID field to the document schema.

# 1. Optional or Required

- An optional field will require additional checking for the type
in user code (annoying).
- However, vectorstores currently don't respect this field. So if we
make it
required and start returning random UUIDs that might be even more
confusing
  to users.


**Proposal**: Start with Optional and convert to Required (with default
set to uuid4()) in 1-2 major releases.


# 2. Override __str__ or generic solution in prompts

Overriding __str__ as a simple way to avoid changing user code that
relies on
default str(document) in prompts. 


I considered rolling out a more general solution in prompts
(https://github.com/langchain-ai/langchain/pull/8685),
but to do that we need to:

1. Make things serializable
2. The more general solution would likely need to be backwards
compatible as well
3. It's unclear that one wants to format a List[int] in the same way as
List[Document]. The former should be `,` separated (likely), the latter
   should be `---` separated (likely).


**Proposal** Start with __str__ override and focus on the vectorstore
APIs, we generalize prompts later
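
A small sketch of the optional field:

```python
from langchain_core.documents import Document

doc = Document(page_content="hello world", id="doc-1")  # id stays optional for now
print(doc.id)    # "doc-1"
print(str(doc))  # __str__ is overridden so prompts using str(document) are unchanged
```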
2024-06-27 12:15:58 -04:00
ccurme
5bfcb898ad
openai[patch]: bump sdk version (#23592)
Tests failing with `TypeError: Completions.create() got an unexpected
keyword argument 'parallel_tool_calls'`
2024-06-27 11:57:24 -04:00
Jacob Lee
60fc15a56b
docs[patch]: Update docs introduction and README (#23558)
CC @hwchase17 @baskaryan
2024-06-27 08:51:43 -07:00
mackong
daf733b52e
langchain[minor]: fix comment typo (#23564)
**Description:** fix typo of comment
**Issue:** N/A
**Dependencies:** N/A
2024-06-27 10:09:18 -04:00
Leonid Ganeline
2c9b84c3a8
core[patch]: docstrings agents (#23502)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-06-26 17:50:48 -04:00
Leonid Ganeline
2a5d59b3d7
core[patch]: callbacks docstrings (#23375)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-06-26 17:11:06 -04:00
Leonid Ganeline
1141b08eb8
core: docstrings example_selectors (#23542)
Added missed docstrings. Formatted docstrings to the consistent form.
2024-06-26 17:10:40 -04:00
wenngong
3bf1d98dbf
langchain[patch]: update agent and chains modules root_validators (#23256)
Description: update agent and chains modules Pydantic root_validators.
Issue: the issue #22819

---------

Co-authored-by: gongwn1 <gongwn1@lenovo.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-06-26 17:09:50 -04:00
Bagatur
a7ab93479b
anthropic[patch]: Release 0.1.16 (#23549) 2024-06-26 20:49:13 +00:00
Jib
c0fcf76e93
LangChain-MongoDB: [Experimental] Driver-side index creation helper (#19359)
## Description
Created a helper method to make vector search indexes via client-side
pymongo.

**Recent Update** -- Removed error suppressing/overwriting layer in
favor of letting the original exception provide information.

## ToDo's
- [x] Make _wait_untils for integration test delete index
functionalities.
- [x] Add documentation for its use. Highlight it's experimental
- [x] Post Integration Test Results in a screenshot
- [x] Get review from MongoDB internal team (@shaneharvey, @blink1073 ,
@NoahStapp , @caseyclements)



- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. Added new integration tests. Not eligible for unit testing since the
operation is Atlas Cloud specific.
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.

![image](https://github.com/langchain-ai/langchain/assets/2887713/a3fc8ee1-e04c-4976-accc-fea0eeae028a)


2024-06-26 15:07:28 -04:00
maang-h
5070004e8a
docs: Update Tongyi ChatModel docstring (#23540)
- **Description:** Update Tongyi ChatModel rich docstring
- **Issue:** the issue #22296
2024-06-26 13:07:13 -04:00
yonarw
6d0ebbca1e
community: SAP HANA Vector Engine fix for latest HANA release (#23516)
- **Description:** This PR fixes an issue with SAP HANA Cloud QRC03
version. In that version the number to indicate no length being set for
a vector column changed from -1 to 0. The change in this PR supports both
behaviours (old/new).
- **Dependencies:** No dependencies have been introduced.

- **Tests**:  The change is covered by previous unit tests.
2024-06-26 13:15:51 +00:00
Roman Solomatin
1e3e05b0c3
openai[patch]: add support for extra_body (#23404)
**Description:** Add support for passing the extra_body parameter

Some OpenAI-compatible APIs have additional parameters (for example
[vLLM](https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html#extra-parameters))
that can be passed through `extra_body`. Same question in
https://github.com/openai/openai-python/issues/767
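
A hedged sketch of passing vLLM-specific parameters (the endpoint, model, and extra parameters are illustrative):

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    extra_body={"use_beam_search": True, "best_of": 4},  # forwarded in the request body
)
```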

2024-06-26 13:11:59 +00:00
Alireza Kashani
c39521b70d
Update grobid.py (#23399)
fixed potential `IndexError: list index out of range` in case there is
no title

2024-06-26 09:11:02 -04:00
Qingchuan Hao
ee282a1d2e
community: add missing link (#23526) 2024-06-26 09:06:28 -04:00
Lincoln Stein
c314222796
Add a conversation memory that combines a (optionally persistent) vectorstore history with a token buffer (#22155)
**langchain: ConversationVectorStoreTokenBufferMemory**

- **Description:** This PR adds ConversationVectorStoreTokenBufferMemory.
It is similar in concept to ConversationSummaryBufferMemory. It
maintains an in-memory buffer of messages up to a preset token limit.
After the limit is hit, timestamped messages are written into a
vectorstore retriever rather than into a summary. The user's prompt is
then used to retrieve relevant fragments of the previous conversation.
By persisting the vectorstore, one can maintain memory from session to
session.
- **Issue:** n/a
- **Dependencies:** none
- **Twitter handle:** Please no!!!
- [X] **Add tests and docs**: I looked to see how the unit tests were
written for the other ConversationMemory modules, but couldn't find
anything other than a test for successful import. I need to know whether
you are using pytest.mock or another fixture to simulate the LLM and
vectorstore. In addition, I would like guidance on where to place the
documentation. Should it be a notebook file in docs/docs?

- [X] **Lint and test**: I am seeing some linting errors from a couple
of modules unrelated to this PR.


---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-06-25 20:17:10 -07:00
Bagatur
32f8f39974
core[patch]: use args_schema doc for tool description (#23503) 2024-06-25 15:26:35 -07:00
ccurme
6f7fe82830
text-splitters: release 0.2.2 (#23508) 2024-06-25 18:26:05 -04:00
ccurme
62b16fcc6b
experimental: release 0.0.62 (#23507) 2024-06-25 22:01:35 +00:00
ccurme
99ce84ef23
community: release 0.2.6 (#23501) 2024-06-25 21:29:52 +00:00
ccurme
03c41e725e
langchain: release 0.2.6 (#23426) 2024-06-25 21:03:41 +00:00
ccurme
86ca44d451
core: release 0.2.10 (#23420) 2024-06-25 16:26:31 -04:00
Isaac Francisco
85f5d14cef
[docs]: split up tool docs (#22919) 2024-06-25 13:15:08 -07:00
Nuradil
c93d9e66e4
Community: Update and fix ZenGuardTool docs and add ZenguardTool to init files (#23415)
- [x] **PR title**: "community: update docs and add tool to init.py"

- [x] **PR message**: 
- **Description:** Fixed some errors and comments in the docs and added
our ZenGuardTool and additional classes to init.py for easy access when
importing
- **Question:** when will you update the langchain-community package in
pypi to make our tool available?



Thank you for review!

---------

Co-authored-by: Baur <baur.krykpayev@gmail.com>
2024-06-25 19:26:32 +00:00
William FH
8955bc1866
[Core] Logging: Suppress missing parent warning (#23363) 2024-06-25 14:57:23 -04:00
ccurme
730c551819
core[patch]: export tool output parsers from langchain_core.output_parsers (#23305)
These currently read off AIMessage.tool_calls, and only fall back to
OpenAI parsing if tool calls aren't populated.

Importing these from `openai_tools` (e.g., in our [tool calling
docs](https://python.langchain.com/v0.2/docs/how_to/tool_calling/#tool-calls))
can lead to confusion.

After landing, would need to release core and update docs.
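
After this lands, imports like the following should work from the top-level module (parser names as in the current openai_tools submodule):

```python
from langchain_core.output_parsers import (
    JsonOutputKeyToolsParser,
    JsonOutputToolsParser,
    PydanticToolsParser,
)
```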
2024-06-25 14:40:42 -04:00
Eugene Yurtsev
7e9e69c758
core[patch]: Add unit test for str and repr for Document (#23414) 2024-06-25 18:28:21 +00:00
Bagatur
92ac0fc9bd
openai[patch]: Release 0.1.10 (#23410) 2024-06-25 17:40:02 +00:00
Bagatur
9d145b9630
openai[patch]: fix tool calling token counting (#23408)
Resolves https://github.com/langchain-ai/langchain/issues/23388
2024-06-25 10:34:25 -07:00
Tomaz Bratanic
22fa32e164
LLM Graph transformer dealing with empty strings (#23368)
Pydantic allows empty strings:

```
from langchain.pydantic_v1 import Field, BaseModel

class Property(BaseModel):
  """A single property consisting of key and value"""
  key: str = Field(..., description="key")
  value: str = Field(..., description="value")

x = Property(key="", value="")
```

Which can produce errors downstream. We simply ignore those records
2024-06-25 13:01:53 -04:00
Riccardo Schirone
4530d851e4
Merge pull request #22662
* core: runnables: special handling GeneratorExit because no error
2024-06-25 08:42:03 -04:00
Qingchuan Hao
ad50702934
community: add default value to bing_search_url (#23306)
bing_search_url is an endpoint to request the Bing search resource and is
normally invariant for users, so we can give it a default value to simplify
the usage of this utility/tool.
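
A small usage sketch with the default endpoint in place (the key value is illustrative):

```python
from langchain_community.utilities import BingSearchAPIWrapper

# bing_search_url no longer needs to be supplied explicitly.
search = BingSearchAPIWrapper(bing_subscription_key="<your-key>", k=3)
print(search.run("LangChain"))
```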
2024-06-25 08:08:41 -04:00
ccurme
68e0ae3286
langchain[patch]: update removal target for LLMChain (#23373)
to 1.0

Also improve replacement example in docstring.
2024-06-24 21:51:29 +00:00
wenngong
b33d2346db
community: FlashrankRerank support loading customer client (#23350)
Description: The FlashrankRerank document compressor supports loading a
custom client
Issue: #23338

Co-authored-by: gongwn1 <gongwn1@lenovo.com>
2024-06-24 17:50:08 -04:00
maang-h
f58c40b4e3
docs: Update QianfanChatEndpoint ChatModel docstring (#23337)
- **Description:** Update QianfanChatEndpoint ChatModel rich docstring
- **Issue:** the issue #22296
2024-06-24 17:42:46 -04:00
Rahul Triptahi
9ef93ecd7c
community[minor]: Added classification_location parameter in PebbloSafeLoader. (#22565)
Description: Add classifier_location feature flag. This flag enables
Pebblo to decide the classifier location, local or pebblo-cloud.
Unit Tests: N/A
Documentation: N/A

---------

Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
2024-06-24 17:30:38 -04:00
wenngong
af620db9c7
partners: add lint docstrings for azure-dynamic-sessions/together modules (#23303)
Description: add lint docstrings for azure-dynamic-sessions/together
modules
Issue: #23188 @baskaryan

test: ruff check passed.
<img width="782" alt="image"
src="https://github.com/langchain-ai/langchain/assets/76683249/bf11783d-65b3-4e56-a563-255eae89a3e4">

---------

Co-authored-by: gongwn1 <gongwn1@lenovo.com>
2024-06-24 16:26:54 -04:00
yuncliu
398b2b9c51
community[minor]: Add Ascend NPU optimized Embeddings (#20260)
- **Description:** Add NPU support for embeddings

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-24 20:15:11 +00:00
Luis Rueda
168e9ed3a5
partners: add custom options to MongoDBChatMessageHistory (#22944)
**Description:** Adds options for configuring MongoDBChatMessageHistory
(no breaking changes):
- session_id_key: name of the field that stores the session id
- history_key: name of the field that stores the chat history
- create_index: whether to create an index on the session id field
- index_kwargs: additional keyword arguments to pass to the index
creation

**Discussion:**
https://github.com/langchain-ai/langchain/discussions/22918
**Twitter handle:** @userlerueda
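
A usage sketch with the new options (connection details are illustrative):

```python
from langchain_mongodb import MongoDBChatMessageHistory

history = MongoDBChatMessageHistory(
    connection_string="mongodb://localhost:27017",
    session_id="session-1",
    database_name="chat_db",
    collection_name="chat_histories",
    session_id_key="SessionId",     # field that stores the session id
    history_key="History",          # field that stores the chat history
    create_index=True,              # create an index on the session id field
    index_kwargs={"name": "session_id_index"},
)
history.add_user_message("hello")
```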

---------

Co-authored-by: Jib <Jibzade@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-06-24 19:42:56 +00:00
Eugene Yurtsev
1e750f12f6
standard-tests[minor]: Add standard read write test suite for vectorstores (#23355)
Add standard read write test suite for vectorstores
2024-06-24 19:40:56 +00:00
Eugene Yurtsev
3b3ed72d35
standard-tests[minor]: Add standard tests for BaseStore (#23360)
Add standard tests to base store abstraction. These only work on [str,
str] right now. We'll need to check if it's possible to add
encoder/decoders to generalize
2024-06-24 19:38:50 +00:00
ccurme
e1190c8f3c
mongodb[patch]: fix CI for python 3.12 (#23369) 2024-06-24 19:31:20 +00:00
RUO
2b87e330b0
community: fix issue with nested field extraction in MongodbLoader (#22801)
**Description:** 
This PR addresses an issue in the `MongodbLoader` where nested fields
were not being correctly extracted. The loader now correctly handles
nested fields specified in the `field_names` parameter.

**Issue:** 
Fixes an issue where attempting to extract nested fields from MongoDB
documents resulted in `KeyError`.

**Dependencies:** 
No new dependencies are required for this change.

### Changes
1. **Field Name Parsing**:
- Added logic to parse nested field names and safely extract their
values from the MongoDB documents.

2. **Projection Construction**:
- Updated the projection dictionary to include nested fields correctly.

3. **Field Extraction**:
- Updated the `aload` method to handle nested field extraction using a
recursive approach to traverse the nested dictionaries.

### Example Usage
Updated usage example to demonstrate how to specify nested fields in the
`field_names` parameter:

```python
loader = MongodbLoader(
    connection_string=MONGO_URI,
    db_name=MONGO_DB,
    collection_name=MONGO_COLLECTION,
    filter_criteria={"data.job.company.industry_name": "IT", "data.job.detail": { "$exists": True }},
    field_names=[
        "data.job.detail.id",
        "data.job.detail.position",
        "data.job.detail.intro",
        "data.job.detail.main_tasks",
        "data.job.detail.requirements",
        "data.job.detail.preferred_points",
        "data.job.detail.benefits",
    ],
)

docs = loader.load()
print(len(docs))
for doc in docs:
    print(doc.page_content)
```
### Testing
Tested with a MongoDB collection containing nested documents to ensure
that the nested fields are correctly extracted and concatenated into a
single page_content string.
### Note
This change ensures backward compatibility for non-nested fields and
improves functionality for nested field extraction.
### Output Sample
```python
print(docs[:3])
```
```shell
# output sample:
[
    Document(
        # Here in this example, page_content is the combined text from the fields below
        # "position", "intro", "main_tasks", "requirements", "preferred_points", "benefits"
        page_content='all combined contents from the requested fields in the document',
        metadata={'database': 'Your Database name', 'collection': 'Your Collection name'}
    ),
    ...
]
```

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-24 19:29:11 +00:00
Tomaz Bratanic
aeeda370aa
Sanitize backticks from neo4j labels and types for import (#23367) 2024-06-24 19:05:31 +00:00
Rave Harpaz
f5ff7f178b
Add OCI Generative AI new model support (#22880)
- [x] PR title: 
community: Add OCI Generative AI new model support
 
- [x] PR message:
- Description: adding support for new models offered by OCI Generative
AI services. This is a moderate update of our initial integration PR
16548 and includes a new integration for our chat models under
/langchain_community/chat_models/oci_generative_ai.py
    - Issue: NA
- Dependencies: No new Dependencies, just latest version of our OCI sdk
    - Twitter handle: NA


- [x] Add tests and docs: 
  1. we have updated our unit tests
2. we have updated our documentation including a new ipynb for our new
chat integration


- [x] Lint and test: 
 `make format`, `make lint`, and `make test` run successfully
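A hedged sketch of the new chat integration; the class name and constructor arguments are assumptions based on the module path above, and all IDs and endpoints are placeholders:

```python
from langchain_community.chat_models.oci_generative_ai import ChatOCIGenAI

chat = ChatOCIGenAI(
    model_id="cohere.command-r-16k",  # example of a newly supported model
    service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
    compartment_id="<compartment-ocid>",
    model_kwargs={"temperature": 0.1, "max_tokens": 256},
)
print(chat.invoke("Hello from OCI Generative AI").content)
```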

---------

Co-authored-by: RHARPAZ <RHARPAZ@RHARPAZ-5750.us.oracle.com>
Co-authored-by: Arthur Cheng <arthur.cheng@oracle.com>
2024-06-24 14:48:23 -04:00
Baur
aa358f2be4
community: Add ZenGuard tool (#22959)
**Description**
This is the community integration of ZenGuard AI - the fastest
guardrails for GenAI applications. ZenGuard AI protects against:

- Prompt attacks
- Veering off the pre-defined topics
- PII, sensitive info, and keyword leakage
- Toxicity
- Etc.

**Twitter Handle** : @zenguardai

- [x] **Add tests and docs**: If you're adding a new integration, please
include
  1. Added an integration test
  2. Added colab


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified.

---------

Co-authored-by: Nuradil <nuradil.maksut@icloud.com>
Co-authored-by: Nuradil <133880216+yaksh0nti@users.noreply.github.com>
2024-06-24 17:40:56 +00:00
Mathis Joffre
60103fc4a5
community: Fix OVHcloud 401 Unauthorized on embedding. (#23260)
OVHcloud now rejects calls from users with expired or invalid tokens with a
401 code (previously such calls were treated as anonymous). The Authorization
header therefore has to be removed when there is no token.

Related to: #23178

---------

Signed-off-by: Joffref <mariusjoffre@gmail.com>
2024-06-24 12:58:32 -04:00
Eugene Yurtsev
d90379210a
standard-tests[minor]: Add standard tests for cache (#23357)
Add standard tests for cache abstraction
2024-06-24 15:15:03 +00:00
Leonid Ganeline
987099cfcd
community: toolkits docstrings (#23286)
Added missed docstrings. Formatted docstrings to the consistent form.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-06-22 14:37:52 +00:00
Rahul Triptahi
0cd3f93361
Enhance metadata of sharepointLoader. (#22248)
Description: 2 feature flags are added to SharePointLoader in this PR:

1. load_auth: if set to True, adds authorised identities to metadata
2. load_extended_metadata: if set to True, adds source, owner and full_path to metadata

Unit tests: N/A
Documentation: To be done.
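A minimal sketch of the two flags (all other loader arguments are placeholders):

```python
from langchain_community.document_loaders.sharepoint import SharePointLoader

loader = SharePointLoader(
    document_library_id="<document-library-id>",
    load_auth=True,               # add authorised identities to metadata
    load_extended_metadata=True,  # add source, owner and full_path to metadata
)
docs = loader.load()
print(docs[0].metadata)
```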

---------

Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
2024-06-21 17:03:38 -07:00
Bagatur
bcac6c3aff
openai[patch]: temp fix ignore lint (#23290) 2024-06-21 16:52:52 -07:00
William FH
efb4c12abe
[Core] Add support for inferring Annotated types (#23284)
in bind_tools() / convert_to_openai_function
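A minimal sketch of what this enables, assuming argument descriptions attached via `typing.Annotated` are picked up during conversion:

```python
from typing import Annotated

from langchain_core.utils.function_calling import convert_to_openai_function


def multiply(
    a: Annotated[int, "the first factor"],
    b: Annotated[int, "the second factor"],
) -> int:
    """Multiply two integers."""
    return a * b


# The Annotated metadata should now be reflected in the generated schema;
# the same applies when passing the function to a model's bind_tools().
print(convert_to_openai_function(multiply))
```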
2024-06-21 15:16:30 -07:00
Vadym Barda
9ac302cb97
core[minor]: update draw_mermaid node label processing (#23285)
This fixes a processing issue for nodes with numbers in their labels (e.g.
`"node_1"`, which would previously be relabeled as `"node__"` and is now
correctly processed as `"node_1"`).
2024-06-21 21:35:32 +00:00
Rajendra Kadam
7ee2822ec2
community: Fix TypeError in PebbloRetrievalQA (#23170)
**Description:** 
Fix "`TypeError: 'NoneType' object is not iterable`" when the
auth_context is absent in PebbloRetrievalQA. The auth_context is
optional; hence, PebbloRetrievalQA should work without it, but it throws
an error at the moment. This PR fixes that issue.

**Issue:** NA
**Dependencies:** None
**Unit tests:** NA

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-06-21 17:04:00 -04:00
Iurii Umnov
3b7b933aa2
community[minor]: OpenAPI agent. Add support for PUT, DELETE and PATCH (#22962)
**Description**: Add PUT, DELETE and PATCH tools to the tool list for the
OpenAPI agent if dangerous requests are allowed.

**Issue**: https://github.com/langchain-ai/langchain/issues/20469
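A hedged sketch of the effect; the toolkit and flag names are assumptions about where the opt-in lives:

```python
from langchain_community.agent_toolkits.openapi.toolkit import RequestsToolkit
from langchain_community.utilities.requests import TextRequestsWrapper

toolkit = RequestsToolkit(
    requests_wrapper=TextRequestsWrapper(headers={}),
    allow_dangerous_requests=True,  # opt in; PUT, DELETE and PATCH tools are then included
)
print([tool.name for tool in toolkit.get_tools()])
```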
2024-06-21 20:44:23 +00:00
Guangdong Liu
3c42bf8d97
community(patch): Fix PineconeHybridSearchRetriever not having search_kwargs (#21577)
- close #21521
2024-06-21 16:27:52 -04:00
Rahul Triptahi
4bb3d5c488
[community][quick-fix]: changed from blob.path to blob.path.name in O365BaseLoader. (#22287)
Description: file_metadata_ was not getting propagated to the returned
documents. Changed the metadata_dict lookup key from blob.path to
blob.path.name, i.e. the name of the blob's path.
Documentation: N/A
Unit tests: N/A

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-06-21 15:51:03 -04:00
Bagatur
f824f6d925
docs: fix merge message runs docstring (#23279) 2024-06-21 19:50:50 +00:00
wenngong
f9aea3db07
partners: add lint docstrings for chroma module (#23249)
Description: add lint docstrings for chroma module
Issue: the issue #23188 @baskaryan

test:  ruff check passed.


![image](https://github.com/langchain-ai/langchain/assets/76683249/5e168a0c-32d0-464f-8ddb-110233918019)

---------

Co-authored-by: gongwn1 <gongwn1@lenovo.com>
2024-06-21 19:49:24 +00:00
Bagatur
9eda8f2fe8
docs: fix trim_messages code blocks (#23271) 2024-06-21 17:15:31 +00:00
Bagatur
4c97a9ee53
docs: fix message transformer docstrings (#23264) 2024-06-21 16:10:03 +00:00
Vwake04
0deb98ac0c
pinecone: Fix multiprocessing issue in PineconeVectorStore (#22571)
**Description:**

Currently, the `langchain_pinecone` library forces the `async_req`
(asynchronous required) argument to Pinecone to `True`. This design
choice causes problems when deploying to environments that do not
support multiprocessing, such as AWS Lambda. In such environments, this
restriction can prevent users from successfully using
`langchain_pinecone`.

This PR introduces a change that allows users to specify whether they
want to use asynchronous requests by passing the `async_req` parameter
through `**kwargs`. By doing so, users can set `async_req=False` to
utilize synchronous processing, making the library compatible with AWS
Lambda and other environments that do not support multithreading.
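A minimal sketch, assuming `async_req` is forwarded through `**kwargs` as described (embedding model and index name are placeholders):

```python
from langchain_openai import OpenAIEmbeddings
from langchain_pinecone import PineconeVectorStore

# async_req=False forces synchronous upserts, which avoids multiprocessing
# and makes the store usable in AWS Lambda-style environments.
vectorstore = PineconeVectorStore.from_texts(
    texts=["hello world"],
    embedding=OpenAIEmbeddings(),
    index_name="my-index",
    async_req=False,
)
```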

**Issue:**
This PR does not address a specific issue number but aims to resolve
compatibility issues with AWS Lambda by allowing synchronous processing.

**Dependencies:**
None, that I'm aware of.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-06-21 15:46:01 +00:00
ccurme
75c7c3a1a7
openai: release 0.1.9 (#23263) 2024-06-21 11:15:29 -04:00
Brace Sproul
abe7566d7d
core[minor]: BaseChatModel with_structured_output implementation (#22859) 2024-06-21 08:14:03 -07:00
mackong
360a70c8a8
core[patch]: fix no current event loop for sql history in async mode (#22933)
- **Description:** When using
RunnableWithMessageHistory/SQLChatMessageHistory in async mode, we get
the following error:
```
Error in RootListenersTracer.on_chain_end callback: RuntimeError("There is no current event loop in thread 'asyncio_3'.")
```
which is thrown by
ddfbca38df/libs/community/langchain_community/chat_message_histories/sql.py (L259),
and no message history is added to the database.

This patch adds a new _aexit_history function that is called in async
mode and in turn calls aadd_messages.

The patch uses the `afunc` attribute of a Runnable to check whether the
end listener should be run in async mode or not.

  - **Issue:** #22021, #22022 
  - **Dependencies:** N/A
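A hedged sketch of the async path this patch targets (the wrapped runnable and connection string are illustrative stand-ins):

```python
import asyncio

from langchain_community.chat_message_histories import SQLChatMessageHistory
from langchain_core.runnables import RunnableLambda
from langchain_core.runnables.history import RunnableWithMessageHistory

echo = RunnableLambda(lambda x: f"echo: {x['input']}")  # stand-in for a real chain

chain = RunnableWithMessageHistory(
    echo,
    lambda session_id: SQLChatMessageHistory(
        session_id=session_id, connection_string="sqlite:///chat_history.db"
    ),
    input_messages_key="input",
    history_messages_key="history",
)

# Previously this could log "There is no current event loop ..." from the
# root listener and silently skip persisting the exchange; with the patch
# the history is written via aadd_messages.
result = asyncio.run(
    chain.ainvoke({"input": "hi"}, config={"configurable": {"session_id": "1"}})
)
print(result)
```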
2024-06-21 10:39:47 -04:00
Philippe PRADOS
1c2b9cc9ab
core[minor]: Update pgvector translator for langchain_postgres (#23217)
The SelfQuery PGVectorTranslator is not correct: the operator is "eq"
and not "$eq".
This patch uses a new version of PGVectorTranslator from
langchain_postgres.

It's necessary to release a new version of langchain_postgres (see
[here](https://github.com/langchain-ai/langchain-postgres/pull/75))
before accepting this PR in langchain.
2024-06-21 10:37:09 -04:00
Mu Yang
401d469a92
langchain: fix syntax warning in create_json_chat_agent (#23253)
fix syntax warning in `create_json_chat_agent`

```
.../langchain/agents/json_chat/base.py:22: SyntaxWarning: invalid escape sequence '\ '
  """Create an agent that uses JSON to format its logic, build for Chat Models.
```
2024-06-21 10:05:38 -04:00
mackong
b108b4d010
core[patch]: set schema format for AsyncRootListenersTracer (#23214)
- **Description:** AsyncRootListenersTracer supports on_chat_model_start;
its schema_format should be "original+chat".
  - **Issue:** N/A
  - **Dependencies:**
2024-06-21 09:30:27 -04:00
Bagatur
976b456619
docs: BaseChatModel key methods table (#23238)
If we're moving toward documenting inherited params, I think these kinds
of tables become more important

![Screenshot 2024-06-20 at 3 59 12
PM](https://github.com/langchain-ai/langchain/assets/22008038/722266eb-2353-4e85-8fae-76b19bd333e0)
2024-06-20 21:00:22 -07:00
ccurme
a7b4175091
standard tests: add test for tool calling (#23234)
Including streaming
2024-06-20 17:20:11 -04:00
Bagatur
12e0c28a6e
docs: fix chat model methods table (#23233)
rst table not md
![Screenshot 2024-06-20 at 12 37 46
PM](https://github.com/langchain-ai/langchain/assets/22008038/7a03b869-c1f4-45d0-8d27-3e16f4c6eb19)
2024-06-20 19:51:10 +00:00
Zheng Robert Jia
a349fce880
docs[minor],community[patch]: Minor tutorial docs improvement, minor import error quick fix. (#22725)
Minor changes to module import error handling and fixes for minor issues
in tutorial documents.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-06-20 15:36:49 -04:00
Eugene Yurtsev
7545b1d29b
core[patch]: Fix doc-strings for code blocks (#23232)
Code blocks need extra space around them to be rendered properly by
sphinx
2024-06-20 19:34:52 +00:00
Luis Moros
d5be160af0
community[patch]: Fix sql_database.from_databricks issue when run from a Job (#23224)
**Description**: When ``sql_database.from_databricks`` is executed
from a Workflow Job, the ``context`` object does not have a
"browserHostName" property, resulting in an error. This change handles
the error so the "DATABRICKS_HOST" env variable value is used instead of
stopping the flow.

Co-authored-by: lmorosdb <lmorosdb>
2024-06-20 19:34:15 +00:00
Cory Waddingham
cd6812342e
pinecone[patch]: Update Poetry requirements for pinecone-client >=3.2.2 (#22094)
This change updates the requirements in
`libs/partners/pinecone/pyproject.toml` to allow all versions of
`pinecone-client` greater than or equal to 3.2.2.

This change resolves issue
[21955](https://github.com/langchain-ai/langchain/issues/21955).

---------

Co-authored-by: Erick Friis <erickfriis@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-06-20 18:59:36 +00:00
Eugene Yurtsev
59d7adff8f
core[patch]: Add clarification about streaming to RunnableLambda (#23227)
Add streaming clarification to runnable lambda docstring.
2024-06-20 16:47:16 +00:00
maang-h
bc4cd9c5cc
community[patch]: Update root_validators ChatModels: ChatBaichuan, QianfanChatEndpoint, MiniMaxChat, ChatSparkLLM, ChatZhipuAI (#22853)
This PR updates root validators for:

- ChatModels: ChatBaichuan, QianfanChatEndpoint, MiniMaxChat,
ChatSparkLLM, ChatZhipuAI

Issues #22819

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-06-20 16:36:41 +00:00
ChrisDEV
cb6cf4b631
Fix return value type of dumpd (#20123)
The return type of `json.loads` is `Any`.

Since `dumpd` returns the result of `json.loads`, its return type should
match, so the correction here follows naturally.

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-06-20 16:31:41 +00:00
Guangdong Liu
0bce28cd30
core(patch): Fix encoding problem of load_prompt method (#21559)
- description: Add encoding parameters.
- @baskaryan, @efriis, @eyurtsev, @hwchase17.


![54d25ac7b1d5c2e47741a56fe8ed8ba](https://github.com/langchain-ai/langchain/assets/48236177/ffea9596-2001-4e19-b245-f8a6e231b9f9)
2024-06-20 09:25:54 -07:00
Philippe PRADOS
8711c61298
core[minor]: Adds an in-memory implementation of RecordManager (#13200)
**Description:**
langchain offers three technologies to save data:
-
[vectorstore](https://python.langchain.com/docs/modules/data_connection/vectorstores/)
- [docstore](https://js.langchain.com/docs/api/schema/classes/Docstore)
- [record
manager](https://python.langchain.com/docs/modules/data_connection/indexing)

If you want to combine these technologies in a single persistence
strategy, you need an in-memory implementation for each. For `DocStore`,
langchain already proposes `InMemoryDocstore`.

We propose the class `MemoryRecordManager` to complete the system.

This is the prelude to another pull request, which needs a consistent
combination of persistence components.
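A minimal sketch of how such an in-memory record manager could be exercised. Note the class is named `MemoryRecordManager` in this PR; the import path and name below assume the released form under `langchain_core.indexing`:

```python
# Sketch only: class/module names are assumptions about the released API.
from langchain_core.indexing import InMemoryRecordManager

manager = InMemoryRecordManager(namespace="demo")
manager.create_schema()                 # set up backing storage (standard RecordManager API)
manager.update(["doc-1", "doc-2"])      # record that these keys were written
print(manager.exists(["doc-1", "doc-3"]))  # expected: [True, False]
print(manager.list_keys())
manager.delete_keys(["doc-2"])
```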

**Tag maintainer:**
@baskaryan

**Twitter handle:**
@pprados

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-06-20 12:19:10 -04:00
Leonid Ganeline
51e75cf59d
community: docstrings (#23202)
Added missed docstrings. Format docstrings to the consistent format
(used in the API Reference)
2024-06-20 11:08:13 -04:00
Julian Weng
6a1a0d977a
partners[minor]: Fix value error message for with_structured_output (#22877)
Currently, calling `with_structured_output()` with an invalid method
argument raises `Unrecognized method argument. Expected one of
'function_calling' or 'json_format'`, but the JSON mode option [is now
referred
to](https://python.langchain.com/v0.2/docs/how_to/structured_output/#the-with_structured_output-method)
by `'json_mode'`. This fixes that.

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-06-20 15:03:21 +00:00
Leonid Ganeline
41f7620989
huggingface: docstrings (#23148)
Added missed docstrings. Format docstrings to the consistent format
(used in the API Reference)

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-06-20 13:22:40 +00:00
ccurme
066a5a209f
huggingface[patch]: fix CI for python 3.12 (#23197) 2024-06-20 09:17:26 -04:00
xyd
9b3a025f9c
fix https://github.com/langchain-ai/langchain/issues/23215 (#23216)
Fix a bug: the ZhipuAIEmbeddings class is not working.

Co-authored-by: xu yandong <shaonian@acsx1.onexmail.com>
2024-06-20 13:04:50 +00:00
Bagatur
ad7f2ec67d
standard-tests[patch]: test stop not stop_sequences (#23200) 2024-06-19 18:07:33 -07:00
David DeCaprio
a4bcb45f65
core:Add optional max_messages to MessagePlaceholder (#16098)
- **Description:** Add optional max_messages to MessagePlaceholder
- **Issue:**
[16096](https://github.com/langchain-ai/langchain/issues/16096)
- **Dependencies:** None
- **Twitter handle:** @davedecaprio

Sometimes it's better to limit the history in the prompt itself rather
than the memory. This is needed if you want different prompts in the
chain to have different history lengths.
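A minimal sketch; the parameter name follows this PR's title and the merged API may expose it under a different name:

```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        # Only the most recent messages from "history" are formatted into the prompt.
        MessagesPlaceholder(variable_name="history", max_messages=10),
        ("human", "{question}"),
    ]
)
```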

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-06-19 23:39:51 +00:00
shaunakgodbole
7193634ae6
fireworks[patch]: fix api_key alias in Fireworks LLM (#23118)
Thank you for contributing to LangChain!

**Description**
The current code snippet for `Fireworks` had incorrect parameters. This
PR fixes those parameters.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-19 21:14:42 +00:00
Eugene Yurtsev
1fcf875fe3
core[patch]: Document agent schema (#23194)
* Document agent schema
* Refer folks to langgraph for more information on how to create agents.
2024-06-19 20:16:57 +00:00
Eugene Yurtsev
c2d43544cc
core[patch]: Document messages namespace (#23154)
- Moved doc-strings below attributes in TypedDicts -- seems to render
better on APIReference pages.
* Provided more description and some simple code examples
2024-06-19 15:00:00 -04:00
Eugene Yurtsev
3c917204dc
core[patch]: Add doc-strings to outputs, fix @root_validator (#23190)
- Document outputs namespace
- Update a vanilla @root_validator that was missed
2024-06-19 14:59:06 -04:00
Bagatur
8698cb9b28
infra: add more formatter rules to openai (#23189)
Turns on
https://docs.astral.sh/ruff/settings/#format_docstring-code-format and
https://docs.astral.sh/ruff/settings/#format_skip-magic-trailing-comma

```toml
[tool.ruff.format]
docstring-code-format = true
skip-magic-trailing-comma = true
```
2024-06-19 11:39:58 -07:00
Michał Krassowski
710197e18c
community[patch]: restore compatibility with SQLAlchemy 1.x (#22546)
- **Description:** Restores compatibility with SQLAlchemy 1.4.x that was
broken since #18992 and adds a test run for this version on CI (only for
Python 3.11)
- **Issue:** fixes #19681
- **Dependencies:** None
- **Twitter handle:** `@krassowski_m`

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-06-19 17:58:57 +00:00
Erick Friis
48d6ea427f
upstage: move to external repo (#22506) 2024-06-19 17:56:07 +00:00
Bagatur
0a4ee864e9
openai[patch]: image token counting (#23147)
Resolves #23000

---------

Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-06-19 10:41:47 -07:00
Jorge Piedrahita Ortiz
b3e53ffca0
community[patch]: sambanova llm integration improvement (#23137)
- **Description:** sambanova sambaverse integration improvement: removed
input parsing that was changing the raw user input and that effectively
made setting the process prompt parameter to true mandatory
2024-06-19 10:30:14 -07:00
Jorge Piedrahita Ortiz
e162893d7f
community[patch]: update sambastudio embeddings (#23133)
Description: update sambastudio embeddings integration, now compatible
with generic endpoints and CoE endpoints
2024-06-19 10:26:56 -07:00
Philippe PRADOS
db6f46c1a6
langchain[small]: Change type to BasePromptTemplate (#23083)
Change
```python
from_llm(
    prompt: PromptTemplate,
    ...
)
```
to
```python
from_llm(
    prompt: BasePromptTemplate,
    ...
)
```
2024-06-19 13:19:36 -04:00
Sergey Kozlov
94452a94b1
core[patch]: add exceptions propagation test for astream_events v2 (#23159)
**Description:** `astream_events(version="v2")` didn't propagate
exceptions in `langchain-core<=0.2.6`; this was fixed in #22916. This PR
adds a unit test to check that exceptions are propagated upwards.

Co-authored-by: Sergey Kozlov <sergey.kozlov@ludditelabs.io>
2024-06-19 13:00:25 -04:00
Leonid Ganeline
50484be330
prompty: docstring (#23152)
Added missed docstrings. Format docstrings to the consistent format
(used in the API Reference)

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-06-19 12:50:58 -04:00
chenxi
505a2e8743
fix: MoonshotChat fails when setting the moonshot_api_key through the OS environment. (#23176)
Close #23174

Co-authored-by: tianming <tianming@bytenew.com>
2024-06-19 16:28:24 +00:00
Bagatur
677408bfc9
core[patch]: fix chat history circular import (#23182) 2024-06-19 09:08:36 -07:00
Eugene Yurtsev
883e90d06e
core[patch]: Add an example to the Document schema doc-string (#23131)
Add an example to the document schema
2024-06-19 11:35:30 -04:00
ccurme
2b08e9e265
core[patch]: update test to catch circular imports (#23172)
This raises ImportError due to a circular import:
```python
from langchain_core import chat_history
```

This does not:
```python
from langchain_core import runnables
from langchain_core import chat_history
```

Here we update `test_imports` to run each import in a separate
subprocess. Open to other ways of doing this!
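A minimal sketch of the subprocess approach (the module name is just an example):

```python
import subprocess
import sys

# Running the import in a fresh interpreter means previously imported modules
# cannot mask a circular import.
proc = subprocess.run(
    [sys.executable, "-c", "from langchain_core import chat_history"],
    capture_output=True,
    text=True,
)
assert proc.returncode == 0, proc.stderr
```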
2024-06-19 15:24:38 +00:00
Eugene Yurtsev
ae4c0ed25a
core[patch]: Add documentation to load namespace (#23143)
Document some of the modules within the load namespace
2024-06-19 15:21:41 +00:00
Eugene Yurtsev
a34e650f8b
core[patch]: Add doc-string to document compressor (#23085) 2024-06-19 11:03:49 -04:00
Eugene Yurtsev
1007a715a5
community[patch]: Prevent unit tests from making network requests (#23180)
* Prevent unit tests from making network requests
2024-06-19 14:56:30 +00:00
ccurme
ca798bc6ea
community: move test to integration tests (#23178)
Tests failing on master with

> FAILED
tests/unit_tests/embeddings/test_ovhcloud.py::test_ovhcloud_embed_documents
- ValueError: Request failed with status code: 401, {"message":"Bad
token; invalid JSON"}
2024-06-19 14:39:48 +00:00