Commit Graph

5008 Commits (1318d534afb8e8db4acf69baa9a12c779fec5297)

Author SHA1 Message Date
Carlos André Antunes 20151384d7
fix azure_openai.py: some keys do not exist (#24158)
In some lines the code tries to read a key that does not exist yet. In these
cases I changed the direct dict access to the dict.get() method.
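
An illustrative sketch of the pattern only (the variable names are hypothetical, not the actual azure_openai.py code): direct indexing raises KeyError on a missing key, while `dict.get()` tolerates it.

```python
# Hypothetical example; `response` stands in for the parsed API payload.
usage = response.get("usage") or {}            # response["usage"] would raise KeyError if absent
prompt_tokens = usage.get("prompt_tokens", 0)  # default when the key is not present yet
```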

2 months ago
blueoom d895614d19
text_splitters: add request parameters for function HTMLHeaderTextSplitter.split_text… (#24178)
**Description:**

The `split_text_from_url` method of `HTMLHeaderTextSplitter` does not
include parameters like `timeout` when using `requests` to send a
request. Therefore, I suggest adding a `kwargs` parameter to the
function, which can be passed as arguments to `requests.get()`
internally, allowing control over the `get` request.
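
A usage sketch of what this enables, assuming the extra kwargs are forwarded to `requests.get()` as described:

```python
from langchain_text_splitters import HTMLHeaderTextSplitter

splitter = HTMLHeaderTextSplitter(
    headers_to_split_on=[("h1", "Header 1"), ("h2", "Header 2")]
)
# timeout is forwarded to requests.get() internally
docs = splitter.split_text_from_url("https://example.com", timeout=10)
```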

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2 months ago
Bagatur 9d0c1d2dc9
docs: specify init_chat_model version (#24274) 2 months ago
Bagatur c9473367b1
langchain[patch]: Release 0.2.8 (#24273) 2 months ago
JP-Ellis f77659463a
core[patch]: allow message utils to work with lcel (#23743)
The function `convert_to_messages` has had an expansion of the
arguments it can take:

1. Previously, it only could take a `Sequence` in order to iterate over
it. This has been broadened slightly to an `Iterable` (which should have
no other impact).
2. Support for `PromptValue` and `BaseChatPromptTemplate` has been
added. These are generated when combining messages using the overloaded
`+` operator.

Functions which rely on `convert_to_messages` (namely `filter_messages`,
`merge_message_runs` and `trim_messages`) have had the type of their
arguments similarly expanded.
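
For example, a `PromptValue` produced by a chat prompt template can now be passed directly (a minimal sketch):

```python
from langchain_core.messages import convert_to_messages
from langchain_core.prompts import ChatPromptTemplate

prompt_value = ChatPromptTemplate.from_messages(
    [("system", "You are helpful."), ("human", "{question}")]
).invoke({"question": "hi"})

messages = convert_to_messages(prompt_value)  # previously required a sequence of messages
```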

Resolves #23706.

---------

Signed-off-by: JP-Ellis <josh@jpellis.me>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2 months ago
Harold Martin ccdaf14eff
docs: Spell check fixes (#24217)
**Description:** Spell check fixes for docs, comments, and a couple of
strings. No code changes (e.g., variable names).
**Issue:** none
**Dependencies:** none
**Twitter handle:** hmartin
2 months ago
Leonid Ganeline cacdf96f9c
core docstrings `tracers` update (#24211)
Added missing docstrings. Formatted docstrings to a consistent form.
2 months ago
Leonid Ganeline 36ee083753
core: docstrings `utils` update (#24213)
Added missing docstrings. Formatted docstrings to a consistent form.
2 months ago
thehunmonkgroup e8a21146d3
community[patch]: upgrade default model for ChatAnyscale (#24232)
The old default `meta-llama/Llama-2-7b-chat-hf` is no longer supported.
2 months ago
Bagatur 620b118c70
core[patch]: Release 0.2.19 (#24272) 2 months ago
ccurme 888fbc07b5
core[patch]: support passing `args_schema` through `as_tool` (#24269)
Note: this allows the schema to be passed in positionally.

```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.runnables import RunnableLambda


class Add(BaseModel):
    """Add two integers together."""

    a: int = Field(..., description="First integer")
    b: int = Field(..., description="Second integer")


def add(input: dict) -> int:
    return input["a"] + input["b"]


runnable = RunnableLambda(add)
as_tool = runnable.as_tool(Add)
as_tool.args_schema.schema()
```
```
{'title': 'Add',
 'description': 'Add two integers together.',
 'type': 'object',
 'properties': {'a': {'title': 'A',
   'description': 'First integer',
   'type': 'integer'},
  'b': {'title': 'B', 'description': 'Second integer', 'type': 'integer'}},
 'required': ['a', 'b']}
```
2 months ago
ccurme ab2d7821a7
fireworks[patch]: use firefunction-v2 in standard tests (#24264) 2 months ago
ccurme 6fc7610b1c
standard-tests[patch]: update test_bind_runnables_as_tools (#24241)
Reduce number of tool arguments from two to one.
2 months ago
Bagatur 0da5078cad
langchain[minor]: Generic configurable model (#23419)
Alternative to
[#23244](https://github.com/langchain-ai/langchain/pull/23244). Allows
you to use chat model declarative methods.

![Screenshot 2024-06-25 at 1 07 10
PM](https://github.com/langchain-ai/langchain/assets/22008038/910d1694-9b7b-46bc-bc2e-3792df9321d6)
2 months ago
Bagatur d0728b0ba0
core[patch]: add tool name to tool message (#24243)
Copying current ToolNode behavior
2 months ago
Bagatur 5c3e2612da
core[patch]: Release 0.2.18 (#24230) 2 months ago
Bagatur 65321bf975
core[patch]: fix ToolCall "type" when streaming (#24218) 2 months ago
Anush a653b209ba
qdrant: test new QdrantVectorStore (#24165)
## Description

This PR adds integration tests to follow up on #24164.

By default, the tests use an in-memory instance.

To run the full suite of tests, with both in-memory and Qdrant server:

```
$ docker run -p 6333:6333 qdrant/qdrant

$ make test

$ make integration_test
```

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2 months ago
Roman Solomatin f071581aea
openai[patch]: update openai params (#23691)
**Description:** Explicitly add parameters from the OpenAI API

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2 months ago
Leonid Ganeline f0a7581b50
milvus: docstring (#23151)
Added missing docstrings. Formatted docstrings to the consistent format
used in the API Reference.

---------

Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
Co-authored-by: Erick Friis <erick@langchain.dev>
2 months ago
Christian D. Glissov 474b88326f
langchain_qdrant: Added method "_asimilarity_search_with_relevance_scores" to Qdrant class (#23954)
I stumbled upon a bug that led to different similarity scores between
the async and sync similarity searches with relevance scores in Qdrant.
The reason is that `_asimilarity_search_with_relevance_scores` is missing,
which makes langchain_qdrant fall back to the vectorstore base class's
method, leading to drastically different results.

To illustrate the magnitude, here are the results of running an identical
search in a test vectorstore.

Output of asimilarity_search_with_relevance_scores:
[0.9902903374601824, 0.9472135924938804, 0.8535534011299859]

Output of similarity_search_with_relevance_scores:
[0.9805806749203648, 0.8944271849877607, 0.7071068022599718]

Co-authored-by: Erick Friis <erick@langchain.dev>
2 months ago
Bagatur bdc03997c9
standard-tests[patch]: check for ToolCall["type"] (#24209) 2 months ago
Miroslav aee55eda39
community: Skip login to HuggingFaceHub when token is not set (#21561)
- **Description:** Skip login to HuggingFaceHub when `huggingfacehub_api_token`
is not set. This is needed when using a custom `endpoint_url` outside of
HuggingFaceHub.
- **Issue:** fixes https://github.com/langchain-ai/langchain/issues/20342 and
https://github.com/langchain-ai/langchain/issues/19685
- **Dependencies:** None
- **Tests:** Tested with a locally available TGI endpoint. Example usage:
```python
from langchain_community.llms import HuggingFaceEndpoint

llm = HuggingFaceEndpoint(
    endpoint_url='http://localhost:8080',
    server_kwargs={
        "headers": {"Content-Type": "application/json"}
    }
)
resp = llm.invoke("Tell me a joke")
print(resp)
```
Also tested against HF Endpoints:
```python
from langchain_community.llms import HuggingFaceEndpoint
huggingfacehub_api_token = "hf_xyz"
repo_id = "mistralai/Mistral-7B-Instruct-v0.2"
llm = HuggingFaceEndpoint(
    huggingfacehub_api_token=huggingfacehub_api_token,
    repo_id=repo_id,
)
resp = llm.invoke("Tell me a joke")
print(resp)
```

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2 months ago
Anush d09dda5a08
qdrant: Bump patch version (#24168)
# Description

To release a new version of `langchain-qdrant` after #24165 and #24166.
2 months ago
Bagatur 12950cc602
standard-tests[patch]: improve runnable tool description (#24210) 2 months ago
Erick Friis e8ee781a42
ibm: move to external repo (#24208) 2 months ago
Bagatur 02e71cebed
together[patch]: Release 0.1.4 (#24205) 2 months ago
Bagatur 259d4d2029
anthropic[patch]: Release 0.1.20 (#24204) 2 months ago
Bagatur 3aed74a6fc
fireworks[patch]: Release 0.1.5 (#24203) 2 months ago
Bagatur 13b0d7ec8f
openai[patch]: Release 0.1.16 (#24202) 2 months ago
Bagatur 71cd6e6feb
groq[patch]: Release 0.1.7 (#24201) 2 months ago
Bagatur 99054e19eb
mistralai[patch]: Release 0.1.10 (#24200) 2 months ago
Bagatur 7a1321e2f9
ibm[patch]: Release 0.1.10 (#24199) 2 months ago
Bagatur cb5031f22f
integrations[patch]: require core >=0.2.17 (#24207) 2 months ago
Nithish Raghunandanan f1618ec540
couchbase: Add standard and semantic caches (#23607)

**Description:** Add support for caching (standard + semantic) LLM
responses using Couchbase



---------

Co-authored-by: Nithish Raghunandanan <nithishr@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2 months ago
Eugene Yurtsev 8d82a0d483
core[patch]: Mark GraphVectorStore as beta (#24195)
* This PR marks graph vectorstore as beta
2 months ago
Bagatur 0a1e475a30
core[patch]: Release 0.2.17 (#24189) 2 months ago
Bagatur 6166ea67a8
core[minor]: rename ToolMessage.raw_output -> artifact (#24185) 2 months ago
Jean Nshuti d77d9bfc00
community[patch]: fix typo in document content returned from semanticscholar (#24175)
Update "astract" -> "abstract"
2 months ago
Leonid Ganeline aa3e3cfa40
core[patch]: docstrings `runnables` update (#24161)
Added missing docstrings. Formatted docstrings to a consistent form.
2 months ago
Tomaz Bratanic d3a2b9fae0
Fix neo4j type error on missing constraint information (#24177)
If you use `refresh_schema=False`, the constraint metadata doesn't exist.
Previously, we used a default of `None` in the constraint check, but then
`any` fails because it can't iterate over a `None` value.
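
A minimal sketch of the guard, with variable and key names that are assumptions rather than the exact source:

```python
# Default to an empty list instead of None so that `any(...)` does not fail
# when refresh_schema=False left no constraint metadata behind.
constraints = metadata.get("constraint") or []
has_constraint = any(c.get("property") == prop for c in constraints)
```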
2 months ago
Anush 7014d07cab
qdrant: new Qdrant implementation (#24164) 2 months ago
Xander Dumaine 35784d1c33
langchain[minor]: add document_variable_name to create_stuff_documents_chain (#24083)
- **Description:** `StuffDocumentsChain` uses `LLMChain` which is
deprecated by langchain runnables. `create_stuff_documents_chain` is the
replacement, but needs support for `document_variable_name` to allow
multiple uses of the chain within a longer chain.
- **Issue:** none
- **Dependencies:** none
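
A usage sketch of the new parameter; here `llm` is assumed to be any chat model instance:

```python
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("Summarize these notes:\n\n{source_docs}")
# Route the documents into the "source_docs" variable instead of the default "context",
# so the chain can be reused with a different variable name inside a longer chain.
chain = create_stuff_documents_chain(llm, prompt, document_variable_name="source_docs")
```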
2 months ago
Eugene Yurtsev 8858846607
milvus[patch]: Fix Milvus vectorstore for newer versions of langchain-core (#24152)
Fix for: https://github.com/langchain-ai/langchain/issues/24116

This keeps the old behavior of add_documents and add_texts
2 months ago
thedavgar ffe6ca986e
community: Fix bug in Azure Search vectorstore asynchronous search (#24081)
**Description**:
This PR fixes a bug described in issue #24064 that occurs when using the
AzureSearch vectorstore with the asynchronous search methods, which are also
used by the retriever. The proposed change simply makes access to the
embedding optional, because it is not used anywhere to retrieve documents.
The synchronous retrieval methods do not use the embedding either.

With this PR the code given by the user in the issue works.

```python
vectorstore = AzureSearch(
    azure_search_endpoint=os.getenv("AI_SEARCH_ENDPOINT_SECRET"),
    azure_search_key=os.getenv("AI_SEARCH_API_KEY"),
    index_name=os.getenv("AI_SEARCH_INDEX_NAME_SECRET"),
    fields=fields,
    embedding_function=encoder,
)

retriever = vectorstore.as_retriever(search_type="hybrid", k=2)

await vectorstore.avector_search("what is the capital of France")
await retriever.ainvoke("what is the capital of France")
```

**Issue**:
The Azure Search vectorstore is not working when searching for documents
with asynchronous methods, as described in issue #24064

**Dependencies**:
There are no extra dependencies required for this change.

---------

Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2 months ago
Anush 7790d67f94
qdrant: New sparse embeddings provider interface - PART 1 (#24015)
## Description

This PR introduces a new sparse embedding provider interface to work
with the new Qdrant implementation that will follow this PR.

Additionally, an implementation of this interface is provided with
https://github.com/qdrant/fastembed.

This PR will be followed by
https://github.com/Anush008/langchain/pull/3.
2 months ago
Erick Friis 1132fb801b
core: release 0.2.16 (#24159) 2 months ago
Nuno Campos 1d37aa8403
core: Remove extra newline (#24157) 2 months ago
ccurme cb95198398
standard-tests[patch]: add tests for runnables as tools and streaming usage metadata (#24153) 2 months ago
Bagatur 8d100c58de
core[patch]: Tools accept RunnableConfig (#24143)
Relies on #24038

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2 months ago
Bagatur 5fd1e67808
core[minor], integrations...[patch]: Support ToolCall as Tool input and ToolMessage as Tool output (#24038)
Changes:
- ToolCall, InvalidToolCall and ToolCallChunk can all accept a "type"
parameter now
- LLM integration packages add "type" to all the above
- Tool supports ToolCall inputs that have "type" specified
- Tool outputs ToolMessage when a ToolCall is passed as input
- Tools can separately specify ToolMessage.content and
ToolMessage.raw_output
- Tools emit events for validation errors (using on_tool_error and
on_tool_end)

Example:
```python
@tool("structured_api", response_format="content_and_raw_output")
def _mock_structured_tool_with_raw_output(
    arg1: int, arg2: bool, arg3: Optional[dict] = None
) -> Tuple[str, dict]:
    """A Structured Tool"""
    return f"{arg1} {arg2}", {"arg1": arg1, "arg2": arg2, "arg3": arg3}


def test_tool_call_input_tool_message_with_raw_output() -> None:
    tool_call: Dict = {
        "name": "structured_api",
        "args": {"arg1": 1, "arg2": True, "arg3": {"img": "base64string..."}},
        "id": "123",
        "type": "tool_call",
    }
    expected = ToolMessage("1 True", raw_output=tool_call["args"], tool_call_id="123")
    tool = _mock_structured_tool_with_raw_output
    actual = tool.invoke(tool_call)
    assert actual == expected

    tool_call.pop("type")
    with pytest.raises(ValidationError):
        tool.invoke(tool_call)

    actual_content = tool.invoke(tool_call["args"])
    assert actual_content == expected.content
```

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2 months ago
Bagatur eeb996034b
core[patch]: Release 0.2.15 (#24149) 2 months ago
Nuno Campos 03fba07d15
core[patch]: Update styles for mermaid graphs (#24147) 2 months ago
ccurme 8ee8ca7c83
core[patch]: propagate `parse_docstring` to tool decorator (#24123)
Disabled by default.

```python
from langchain_core.tools import tool

@tool(parse_docstring=True)
def foo(bar: str, baz: int) -> str:
    """The foo.

    Args:
        bar: this is the bar
        baz: this is the baz
    """
    return bar


foo.args_schema.schema()
```
```json
{
  "title": "fooSchema",
  "description": "The foo.",
  "type": "object",
  "properties": {
    "bar": {
      "title": "Bar",
      "description": "this is the bar",
      "type": "string"
    },
    "baz": {
      "title": "Baz",
      "description": "this is the baz",
      "type": "integer"
    }
  },
  "required": [
    "bar",
    "baz"
  ]
}
```
2 months ago
Jacob Lee f1f1f75782
community[patch]: Make AzureML endpoint return AI messages for type assistant (#24085) 2 months ago
Eugene Yurtsev 4ba14adec6
core[patch]: Clean up indexing test code (#24139)
Refactor the code to use the existing InMemoryVectorStore.

This change is needed for another PR that moves some of the imports
around (and messes up the mock.patch in this file)
2 months ago
Atul R 457677c1b7
community: Fixes use of ImagePromptTemplate with Ollama (#24140)
Description: Fixes ImagePromptTemplate for multimodal LLMs like LLaVA when
using Ollama.
Twitter handle: https://x.com/a7ulr

Details:

When using LLaVA models / any Ollama multimodal LLMs and passing images
in the prompt as URLs, LangChain breaks with this error.

```python
image_url_components = image_url.split(",")
                           ^^^^^^^^^^^^^^^^^^^^
AttributeError: 'dict' object has no attribute 'split'
```

From the looks of it, there was a bug where the condition checked for a
`url` field in the variable but failed to actually assign it.

This PR fixes ImagePromptTemplate for multimodal LLMs like LLaVA when
using Ollama specifically.
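
A hypothetical sketch of the fix, not the actual source; when the templated value is a dict, the `url` field needs to be pulled out before splitting:

```python
# Illustrative only; variable names are assumptions.
if isinstance(image_url, dict) and "url" in image_url:
    image_url = image_url["url"]
image_url_components = image_url.split(",")
```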

@hwchase17
2 months ago
Matt 8327925ab7
community: support additional Azure Search options (#24134)
- **Description:** Support additional kwargs options for the Azure
Search client (Described here
https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/core/azure-core/README.md#configurations)
    - **Issue:** N/A
    - **Dependencies:** No additional Dependencies

---------
2 months ago
ccurme 122e80e04d
core[patch]: add versionadded to `as_tool` (#24138) 2 months ago
Erick Friis c4417ea93c
core: release 0.2.14, remove poetry 1.7 incompatible flag from root (#24137) 2 months ago
Isaac Francisco 7a62d3dbd6
standard-tests[patch]: test that bind_tools can accept regular python function (#24135) 2 months ago
Nuno Campos 2428984205
core: Add metadata to graph json repr (#24131)
2 months ago
Harley Gross ea3cd1ebba
community[minor]: added support for C in RecursiveCharacterTextSplitter (#24091)
Description: Added support for C in RecursiveCharacterTextSplitter by
reusing the separators for C++
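
A usage sketch, assuming the splitter is imported from `langchain_text_splitters` and the `Language` enum gained a `C` member that reuses the C++ separators:

```python
from langchain_text_splitters import Language, RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.C, chunk_size=200, chunk_overlap=0
)
chunks = splitter.split_text("int main(void) { return 0; }")
```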
2 months ago
Nuno Campos 3e454d7568
core: fix docstring (#24129) 2 months ago
Eugene Yurtsev 08638ccc88
community[patch]: QianfanLLMEndpoint fix type information for the keys (#24128)
Fix for issue: https://github.com/langchain-ai/langchain/issues/24126
2 months ago
Nuno Campos ee3fe20af4
core: mermaid: Render metadata key-value pairs when drawing mermaid graph (#24103)
- applies if the node is a runnable binding with metadata attached
2 months ago
Eugene Yurtsev 1e7d8ba9a6
ci[patch]: Update community linter to provide a helpful error message (#24127)
Update community import linter to explain what's wrong
2 months ago
maang-h 16e178a8c2
docs: Add MiniMaxChat docstrings (#24026)
- **Description:** Add MiniMaxChat rich docstrings.
- **Issue:** #22296
2 months ago
Christophe Bornet 5fc5ef2b52
community[minor]: Add graph store extractors (#24065)
This adds an extractor interface and an implementation for HTML pages.
Extractors are used to create GraphVectorStore Links on loaded content.

**Twitter handle:** cbornet_
2 months ago
maang-h 9bcf8f867d
docs: Add SQLChatMessageHistory docstring (#23978)
- **Description:** Add SQLChatMessageHistory docstring.
- **Issue:** #21983

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2 months ago
Rafael Pereira 092e9ee0e6
community[minor]: Neo4j: fixed similarity search docs (#23913)
**Description:** Some documentation was missing regarding the `filter` and
`params` attributes in similarity search methods.

---------

Co-authored-by: rpereira <rafael.pereira@criticalsoftware.com>
2 months ago
Eugene Yurtsev dc131ac42a
core[minor]: Add dispatching for custom events (#24080)
This PR allows dispatching adhoc events for a given run.

# Context

This PR allows users to send arbitrary data to the callback system and
to the astream events API from within a given runnable. This can be
extremely useful to surface custom information to end users about
progress etc.

Integration with langsmith tracer will be done separately since the data
cannot be currently visualized. It'll be accommodated using the events
attribute of the Run

# Examples with astream events

```python
from langchain_core.callbacks import adispatch_custom_event
from langchain_core.tools import tool

@tool
async def foo(x: int) -> int:
    """Foo"""
    await adispatch_custom_event("event1", {"x": x})
    await adispatch_custom_event("event2", {"x": x})
    return x + 1

async for event in foo.astream_events({'x': 1}, version='v2'):
    print(event)
```

```python
{'event': 'on_tool_start', 'data': {'input': {'x': 1}}, 'name': 'foo', 'tags': [], 'run_id': 'fd6fb7a7-dd37-4191-962c-e43e245909f6', 'metadata': {}, 'parent_ids': []}
{'event': 'on_custom_event', 'run_id': 'fd6fb7a7-dd37-4191-962c-e43e245909f6', 'name': 'event1', 'tags': [], 'metadata': {}, 'data': {'x': 1}, 'parent_ids': []}
{'event': 'on_custom_event', 'run_id': 'fd6fb7a7-dd37-4191-962c-e43e245909f6', 'name': 'event2', 'tags': [], 'metadata': {}, 'data': {'x': 1}, 'parent_ids': []}
{'event': 'on_tool_end', 'data': {'output': 2}, 'run_id': 'fd6fb7a7-dd37-4191-962c-e43e245909f6', 'name': 'foo', 'tags': [], 'metadata': {}, 'parent_ids': []}
```

```python
from langchain_core.callbacks import adispatch_custom_event
from langchain_core.runnables import RunnableLambda

@RunnableLambda
async def foo(x: int) -> int:
    """Foo"""
    await adispatch_custom_event("event1", {"x": x})
    await adispatch_custom_event("event2", {"x": x})
    return x + 1

async for event in foo.astream_events(1, version='v2'):
    print(event)
```

```python
{'event': 'on_chain_start', 'data': {'input': 1}, 'name': 'foo', 'tags': [], 'run_id': 'ce2beef2-8608-49ea-8eba-537bdaafb8ec', 'metadata': {}, 'parent_ids': []}
{'event': 'on_custom_event', 'run_id': 'ce2beef2-8608-49ea-8eba-537bdaafb8ec', 'name': 'event1', 'tags': [], 'metadata': {}, 'data': {'x': 1}, 'parent_ids': []}
{'event': 'on_custom_event', 'run_id': 'ce2beef2-8608-49ea-8eba-537bdaafb8ec', 'name': 'event2', 'tags': [], 'metadata': {}, 'data': {'x': 1}, 'parent_ids': []}
{'event': 'on_chain_stream', 'run_id': 'ce2beef2-8608-49ea-8eba-537bdaafb8ec', 'name': 'foo', 'tags': [], 'metadata': {}, 'data': {'chunk': 2}, 'parent_ids': []}
{'event': 'on_chain_end', 'data': {'output': 2}, 'run_id': 'ce2beef2-8608-49ea-8eba-537bdaafb8ec', 'name': 'foo', 'tags': [], 'metadata': {}, 'parent_ids': []}
```

# Examples with handlers 

This is copy-pasted from the unit tests.

```python
    class CustomCallbackManager(BaseCallbackHandler):
        def __init__(self) -> None:
            self.events: List[Any] = []

        def on_custom_event(
            self,
            name: str,
            data: Any,
            *,
            run_id: UUID,
            tags: Optional[List[str]] = None,
            metadata: Optional[Dict[str, Any]] = None,
            **kwargs: Any,
        ) -> None:
            assert kwargs == {}
            self.events.append(
                (
                    name,
                    data,
                    run_id,
                    tags,
                    metadata,
                )
            )

    callback = CustomCallbackManager()

    run_id = uuid.UUID(int=7)

    @RunnableLambda
    def foo(x: int, config: RunnableConfig) -> int:
        dispatch_custom_event("event1", {"x": x})
        dispatch_custom_event("event2", {"x": x}, config=config)
        return x

    foo.invoke(1, {"callbacks": [callback], "run_id": run_id})

    assert callback.events == [
        ("event1", {"x": 1}, UUID("00000000-0000-0000-0000-000000000007"), [], {}),
        ("event2", {"x": 1}, UUID("00000000-0000-0000-0000-000000000007"), [], {}),
    ]
```
3 months ago
Erick Friis 71c2221f8c
openai: release 0.1.15 (#24097) 3 months ago
Erick Friis 6ea6f9f7bc
core: release 0.2.13 (#24096) 3 months ago
ccurme 975b6129f6
core[patch]: support conversion of runnables to tools (#23992)
Open to other thoughts on UX.

string input:
```python
as_tool = retriever.as_tool()
as_tool.invoke("cat")  # [Document(...), ...]
```

typed dict input:
```python
class Args(TypedDict):
    key: int

def f(x: Args) -> str:
    return str(x["key"] * 2)

as_tool = RunnableLambda(f).as_tool(
    name="my tool",
    description="description",  # name, description are inferred if not supplied
)
as_tool.invoke({"key": 3})  # "6"
```

for untyped dict input, allow specification of parameters + types
```python
def g(x: Dict[str, Any]) -> str:
    return str(x["key"] * 2)

as_tool = RunnableLambda(g).as_tool(arg_types={"key": int})
result = as_tool.invoke({"key": 3})  # "6"
```

Passing the `arg_types` is slightly awkward but necessary to ensure tool
calls populate parameters correctly:
```python
from typing import Any, Dict

from langchain_core.runnables import RunnableLambda
from langchain_openai import ChatOpenAI


def f(x: Dict[str, Any]) -> str:
    return str(x["key"] * 2)

runnable = RunnableLambda(f)
as_tool = runnable.as_tool(arg_types={"key": int})

llm = ChatOpenAI().bind_tools([as_tool])

result = llm.invoke("Use the tool on 3.")
tool_call = result.tool_calls[0]
args = tool_call["args"]
assert args == {"key": 3}

as_tool.run(args)
```

Contrived (?) example with langgraph agent as a tool:
```python
from typing import List, Literal
from typing_extensions import TypedDict

from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent


llm = ChatOpenAI(temperature=0)


def magic_function(input: int) -> int:
    """Applies a magic function to an input."""
    return input + 2


agent_1 = create_react_agent(llm, [magic_function])


class Message(TypedDict):
    role: Literal["human"]
    content: str

agent_tool = agent_1.as_tool(
    arg_types={"messages": List[Message]},
    name="Jeeves",
    description="Ask Jeeves.",
)

agent_2 = create_react_agent(llm, [agent_tool])
```

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
3 months ago
Bagatur 6928f4c438
core[minor]: Add ToolMessage.raw_output (#23994)
Decisions to discuss:
1.  is a new attr needed or could additional_kwargs be used for this
2. is raw_output a good name for this attr
3. should raw_output default to {} or None
4. should raw_output be included in serialization
5. do we need to update repr/str to  exclude raw_output
3 months ago
Eugene Yurtsev c4e149d4f1
community[patch]: Add linter to catch @root_validator (#24070)
- Add linter to prevent further usage of vanilla root validator
- Update remaining root validators
3 months ago
ccurme 9c6efadec3
community[patch]: propagate cost information to OpenAI callback (#23996)
This is enabled following
https://github.com/langchain-ai/langchain/pull/22716.
3 months ago
William FH 1e1fd30def
[Core] Fix fstring in logger warning (#24043) 3 months ago
Ethan Yang 13855ef0c3
[HuggingFace Pipeline] add streaming support (#23852) 3 months ago
Nuno Campos 859e434932
core: Speed up json parse for large strings (#24036)
for a large string:
- old 4.657918874989264
- new 0.023724667000351474
3 months ago
Nuno Campos 160fc7f246
core: Move json parsing in base chat model / output parser to bg thread (#24031)
- add version of AIMessageChunk.__add__ that can add many chunks,
instead of only 2
- In agenerate_from_stream merge and parse chunks in bg thread
- In output parse base classes do more work in bg threads where
appropriate

---------

Co-authored-by: William FH <13333726+hinthornw@users.noreply.github.com>
3 months ago
Nuno Campos 73966e693c
openai: Create msg chunk in bg thread (#24032)
3 months ago
Erick Friis 007c5a85d5
multiple: use modern installer in poetry (#23998) 3 months ago
Erick Friis e80c150c44
community: release 0.2.7 (prev was langchain) (#23997) 3 months ago
Erick Friis 9f8fd08955
community: release 0.2.7 (#23993) 3 months ago
Erick Friis bedd893cd1
core: release 0.2.12 (#23991) 3 months ago
Bagatur 1e957c0c23
docs: rm discord (#23985) 3 months ago
Eugene Yurtsev f765e8fa9d
core[minor],community[patch],standard-tests[patch]: Move InMemoryImplementation to langchain-core (#23986)
This PR moves the in memory implementation to langchain-core.

* The implementation remains importable from langchain-community.
* Supporting utilities are marked as private for now.
3 months ago
Eugene Yurtsev aa8c9bb4a9
community[patch]: Add constraint for pdfminer.six to unbreak CI (#23988)
Something changed in pdfminer.six. This PR unbreaks CI without
fixing the underlying PDF parser.
3 months ago
Eugene Yurtsev 2c180d645e
core[minor],community[minor]: Upgrade all @root_validator() to @pre_init (#23841)
This PR introduces a @pre_init decorator that's a @root_validator(pre=True) but with all the defaults populated!
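A minimal sketch of how such a validator looks; the `from langchain_core.utils import pre_init` import path is assumed here:

```python
from typing import Dict

from langchain_core.pydantic_v1 import BaseModel
from langchain_core.utils import pre_init  # assumed public import path


class MyIntegration(BaseModel):
    api_key: str = ""
    timeout: int = 60

    @pre_init
    def validate_environment(cls, values: Dict) -> Dict:
        # Unlike a plain @root_validator(pre=True), field defaults (e.g. timeout=60)
        # are already populated in `values` here.
        if not values["api_key"]:
            values["api_key"] = "dummy-key"
        return values
```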
3 months ago
Eugene Yurtsev 9787552b00
core[patch]: Use InMemoryChatMessageHistory in unit tests (#23916)
Update unit test to use the existing implementation of chat message
history
3 months ago
Rajendra Kadam 8b84457b17
community[minor]: Support PGVector in PebbloRetrievalQA (#23874)
- **Description:** Support PGVector in PebbloRetrievalQA
  - Identity and Semantic Enforcement support for PGVector
  - Refactor Vectorstore validation and name check
  - Clear the overridden identity and semantic enforcement filters
- **Issue:** NA
- **Dependencies:** NA
- **Tests**: NA(already added)
-  **Docs**: Updated
- **Twitter handle:** [@Raj__725](https://twitter.com/Raj__725)
3 months ago
Eugene Yurtsev e0186df56b
core[patch]: Clarify upsert response semantics (#23921) 3 months ago
Rajendra Kadam ee8aa54f53
community[patch]: Fix source path mismatch in PebbloSafeLoader (#23857)
**Description:** Fix for source path mismatch in PebbloSafeLoader. The
fix involves storing the full path in the doc metadata in VectorDB
**Issue:** NA, caught in internal testing
**Dependencies:** NA
**Add tests**:  Updated tests
3 months ago
Eugene Yurtsev 5b7d5f7729
core[patch]: Add comment to clarify aadd_documents (#23920)
Add a comment to clarify how add_documents works
3 months ago
Eugene Yurtsev e0889384d9
standard-tests[minor]: add unit tests for testing get_by_ids, aget_by_ids, upsert, aupsert_by_ids (#23919)
These standard unit tests provide standard tests for functionality
introduced in these PRs:

* https://github.com/langchain-ai/langchain/pull/23774
* https://github.com/langchain-ai/langchain/pull/23594
3 months ago
ccurme 74c7198906
core, anthropic[patch]: support streaming tool calls when function has no arguments (#23915)
resolves https://github.com/langchain-ai/langchain/issues/23911

When an AIMessageChunk is instantiated, we attempt to parse tool calls
off of the tool_call_chunks.

Here we add a special-case to this parsing, where `""` will be parsed as
`{}`.

This is a reaction to how Anthropic streams tool calls in the case where
a function has no arguments:
```
{'id': 'toolu_01J8CgKcuUVrMqfTQWPYh64r', 'input': {}, 'name': 'magic_function', 'type': 'tool_use', 'index': 1}
{'partial_json': '', 'type': 'tool_use', 'index': 1}
```
The `partial_json` does not accumulate to a valid JSON string; most
other providers tend to emit `"{}"` in this case.
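
A sketch of the special case, not the actual parser code:

```python
import json


def parse_streamed_args(partial_json: str) -> dict:
    # An empty accumulated string is treated as "no arguments", i.e. {}.
    return json.loads(partial_json) if partial_json else {}


assert parse_streamed_args("") == {}
assert parse_streamed_args('{"x": 1}') == {"x": 1}
```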
3 months ago
Mateusz Szewczyk 902b57d107
IBM: Added WatsonxChat passing params to invoke method (#23758)
- **Description:** Added WatsonxChat to the chat models preview; updated
passing params to the invoke method and added integration tests
- **Dependencies:** `ibm_watsonx_ai`

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
3 months ago
ccurme 1f5a163f42
langchain[patch]: deprecate QAGenerationChain (#23730) 3 months ago
ccurme 25de47878b
langchain[patch]: deprecate AnalyzeDocumentChain (#23769) 3 months ago
Christophe Bornet 42d049f618
core[minor]: Add Graph Store component (#23092)
This PR introduces a GraphStore component. GraphStore extends
VectorStore with the concept of links between documents based on
document metadata. This allows linking documents based on a variety of
techniques, including common keywords, explicit links in the content,
and other patterns.

This works with existing Documents, so it’s easy to extend existing
VectorStores to be used as GraphStores. The interface can be implemented
for any Vector Store technology that supports metadata, not only graph
DBs.

When retrieving documents for a given query, the first level of search
is done using classical similarity search. Next, links may be followed
using various traversal strategies to get additional documents. This
allows documents to be retrieved that aren’t directly similar to the
query but contain relevant information.

Two retrieval methods are added to the VectorStore ones:
* traversal_search, which gets all linked documents up to a certain depth
* mmr_traversal_search, which selects linked documents using an MMR
algorithm to produce more diverse results.

If a depth of retrieval of 0 is used, GraphStore is effectively a
VectorStore. It enables an easy transition from a simple VectorStore to
GraphStore by adding links between documents as a second step.
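
A hedged usage sketch; the method names come from this PR, but the exact signatures and the concrete store instance are assumptions:

```python
# `graph_store` is assumed to be a GraphVectorStore-style implementation.
docs = graph_store.traversal_search("question about topic X", k=4, depth=2)
diverse_docs = graph_store.mmr_traversal_search("question about topic X", k=4, depth=2)
```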

An implementation for Apache Cassandra is also proposed.

See
https://github.com/datastax/ragstack-ai/blob/main/libs/knowledge-store/notebooks/astra_support.ipynb
for a notebook explaining how to use GraphStore and that shows that it
can answer correctly to questions that a simple VectorStore cannot.

**Twitter handle:** _cbornet
3 months ago
Leonid Ganeline 77f5fc3d55
core: docstrings `load` (#23787)
Added missing docstrings. Formatted docstrings to a consistent form.
3 months ago
Eugene Yurtsev 6f08e11d7c
core[minor]: add upsert, streaming_upsert, aupsert, astreaming_upsert methods to the VectorStore abstraction (#23774)
This PR rolls out part of the new proposed interface for vectorstores
(https://github.com/langchain-ai/langchain/pull/23544) to existing store
implementations.

The PR makes the following changes:

1. Adds standard upsert, streaming_upsert, aupsert, astreaming_upsert
methods to the vectorstore.
2. Updates `add_texts` and `aadd_texts` to be non-required, with a
default implementation that delegates to `upsert` and `aupsert` if those
have been implemented. The original `add_texts` and `aadd_texts` methods
are problematic as they spread object-specific information across the
document and **kwargs (e.g., ids are not a part of the document).
3. Adds a default implementation to `add_documents` and `aadd_documents`
that delegates to `upsert` and `aupsert` respectively.
4. Adds standard unit tests to verify that a given vectorstore
implements a correct read/write API.

A downside of this implementation is that it creates `upsert` with a
very similar signature to `add_documents`.
The reason for introducing `upsert` is to:
* Remove any ambiguities about what information is allowed in `kwargs`.
Specifically, kwargs should only be used for information common to all
indexed data (e.g., an indexing timeout).
* Allow inheriting from an anticipated generalized interface for indexing
that will allow indexing `BaseMedia` (i.e., allow making a vectorstore
for images/audio etc.)

`add_documents` can be deprecated in the future in favor of `upsert` to
make sure that users have a single correct way of indexing content.
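
A hedged sketch of the new write path; the exact signature and return shape are assumptions based on this description:

```python
from langchain_core.documents import Document

docs = [
    Document(page_content="hello", metadata={"source": "a"}),
    Document(page_content="world", metadata={"source": "b"}),
]
# `vectorstore` is any store implementing the new API; the response is expected
# to describe which items were indexed successfully.
response = vectorstore.upsert(docs)
```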

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
3 months ago
G Sreejith 3c752238c5
core[patch]: Fix typo in docstring (graphm -> graph) (#23910)
Changes have been made as per the request:
replaced graphm with graph.
3 months ago
Leonid Ganeline 12c92b6c19
core: docstrings `outputs` (#23889)
Added missing docstrings. Formatted docstrings to a consistent form.
3 months ago
Leonid Ganeline 1eca98ec56
core: docstrings `prompts` (#23890)
Added missing docstrings. Formatted docstrings to a consistent form.
3 months ago
Philippe PRADOS 289960bc60
community[patch]: Redis.delete should be a regular method not a static method (#23873)
The `langchain_community.vectorstores.Redis.delete()` method must not be a
`@staticmethod`.

With the current implementation, it's not possible to have multiple
instances of the Redis vectorstore because all instances must share the
`REDIS_URL`.

It does not conform to the base class.
3 months ago
Mohammad Mohtashim 2274d2b966
core[patch]: Accounting for Optional Input Variables in BasePromptTemplate (#22851)
**Description**: After reviewing the prompts API, it is clear that the
only way a user can explicitly mark an input variable as optional is
through the `MessagePlaceholder.optional` attribute. Otherwise, the user
must explicitly pass in the `input_variables` expected to be used in the
`BasePromptTemplate`, which will be validated upon execution. Therefore,
to semantically handle a `MessagePlaceholder` `variable_name` as
optional, we will treat the `variable_name` of `MessagePlaceholder` as a
`partial_variable` if it has been marked as optional. This approach
aligns with how the `variable_name` of `MessagePlaceholder` is already
handled
[here](https://github.com/keenborder786/langchain/blob/optional_input_variables/libs/core/langchain_core/prompts/chat.py#L991).
Additionally, an attribute `optional_variable` has been added to
`BasePromptTemplate`, and the `variable_name` of `MessagePlaceholder` is
also made part of `optional_variable` when marked as optional.

Moreover, the `get_input_schema` method has been updated for
`BasePromptTemplate` to differentiate between optional and non-optional
variables.

**Issue**: #22832, #21425
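
For reference, this is the existing way to mark a placeholder optional that the PR builds on (a minimal sketch):

```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        MessagesPlaceholder("history", optional=True),
        ("human", "{question}"),
    ]
)
# "history" can now be omitted at invocation time without a validation error.
prompt.invoke({"question": "What is LangChain?"})
```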

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
3 months ago
Klaudia Lemiec a2082bc1f8
docs: Arxiv docs update (#23871)
- **Description:** Update of docstrings and doc pages
- **Issue:**
[#22866](https://github.com/langchain-ai/langchain/issues/22866)
3 months ago
André Quintino 99b1467b63
community: add support for 'cloud' parameter in JiraAPIWrapper (#23057)
- **Description:** Enhance JiraAPIWrapper to accept the 'cloud'
parameter through an environment variable. This update allows more
flexibility in configuring the environment for the Jira API.
 - **Twitter handle:** Andre_Q_Pereira

---------

Co-authored-by: André Quintino <andre.quintino@tui.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
3 months ago
wenngong b1e90b3075
community: add model_name param valid for GPT4AllEmbeddings (#23867)
Description: add model_name param valid for GPT4AllEmbeddings

Issue: #23863 #22819

---------

Co-authored-by: gongwn1 <gongwn1@lenovo.com>
3 months ago
volodymyr-memsql a4eb6d0fb1
community: add SingleStoreDB semantic cache (#23218)
This PR adds a `SingleStoreDBSemanticCache` class that implements a
cache based on SingleStoreDB vector store, integration tests, and a
notebook example.

Additionally, this PR contains minor changes to SingleStoreDB vector
store:
 - change add texts/documents methods to return a list of inserted ids
 - implement delete(ids) method to delete documents by list of ids
 - added drop() method to drop the corresponding database table
- updated integration tests to use and check functionality implemented
above


CC: @baskaryan, @hwchase17

---------

Co-authored-by: Volodymyr Tkachuk <vtkachuk-ua@singlestore.com>
3 months ago
Igor Drozdov bb597b1286
feat(community): add bind_tools function for ChatLiteLLM (#23823)
It's a follow-up to https://github.com/langchain-ai/langchain/pull/23765

Now the tools can be bound by calling `bind_tools`

```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.utils.function_calling import convert_to_openai_tool
from langchain_community.chat_models import ChatLiteLLM

class GetWeather(BaseModel):
    '''Get the current weather in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

class GetPopulation(BaseModel):
    '''Get the current population in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

prompt = "Which city is hotter today and which is bigger: LA or NY?"
# tools = [convert_to_openai_tool(GetWeather), convert_to_openai_tool(GetPopulation)]
tools = [GetWeather, GetPopulation]

llm = ChatLiteLLM(model="claude-3-sonnet-20240229").bind_tools(tools)
ai_msg = llm.invoke(prompt)
print(ai_msg.tool_calls)
```


Co-authored-by: Igor Drozdov <idrozdov@gitlab.com>
3 months ago
Jiejun Tan 2be66a38d8
huggingface: Fix huggingface tei support (#22653)
Update former pull request:
https://github.com/langchain-ai/langchain/pull/22595.

Modified
`libs/partners/huggingface/langchain_huggingface/embeddings/huggingface_endpoint.py`,
where the API call function does not match current [Text Embeddings
Inference
API](https://huggingface.github.io/text-embeddings-inference/#/Text%20Embeddings%20Inference/embed).
One example is:
```json
{
  "inputs": "string",
  "normalize": true,
  "truncate": false
}
```
Parameters in `_model_kwargs` are not passed properly in the latest
version. By the way, the issue *[why cause 413?
#50](https://github.com/huggingface/text-embeddings-inference/issues/50)*
might be solved.
3 months ago
Eugene Yurtsev 9ccc4b1616
core[patch]: Fix logic in BaseChatModel that processes the llm string that is used as a key for caching chat models responses (#23842)
This PR should fix the following issue:
https://github.com/langchain-ai/langchain/issues/23824
Introduced as part of this PR:
https://github.com/langchain-ai/langchain/pull/23416

I am unable to reproduce the issue locally though it's clear that we're
getting a `serialized` object which is not a dictionary somehow.

The test below passes for me prior to the PR as well

```python

def test_cache_with_sqllite() -> None:
    from langchain_community.cache import SQLiteCache

    from langchain_core.globals import set_llm_cache

    cache = SQLiteCache(database_path=".langchain.db")
    set_llm_cache(cache)
    chat_model = FakeListChatModel(responses=["hello", "goodbye"], cache=True)
    assert chat_model.invoke("How are you?").content == "hello"
    assert chat_model.invoke("How are you?").content == "hello"
```
3 months ago
Vadym Barda 9bb623381b
core[minor]: update conversion utils to handle RemoveMessage (#23840) 3 months ago
Eugene Yurtsev 4ab78572e7
core[patch]: Speed up unit tests for imports (#23837)
Speed up unit tests for imports
3 months ago
Nico Puhlmann 4a15fce516
langchain: update declarative_base import (#20056)
**Description**: The ``declarative_base()`` function is now available as
sqlalchemy.orm.declarative_base(); importing it from sqlalchemy.ext.declarative
is deprecated since SQLAlchemy 2.0. (Background on SQLAlchemy 2.0 at:
https://sqlalche.me/e/b8d9)
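
The import change in question looks like this (a minimal sketch):

```python
# Deprecated since SQLAlchemy 2.0:
# from sqlalchemy.ext.declarative import declarative_base
# Current location:
from sqlalchemy.orm import declarative_base

Base = declarative_base()
```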

---------

Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
3 months ago
Théo Deschamps 39b19cf764
core[patch]: extract input variables for `path` and `detail` keys in order to format an `ImagePromptTemplate` (#22613)
- Description: Add support for `path` and `detail` keys in
`ImagePromptTemplate`. Previously, only variables associated with the
`url` key were considered. This PR allows for the inclusion of a local
image path and a detail parameter as input to the format method.
- Issues:
    - fixes #20820 
    - related to #22024 
- Dependencies: None
- Twitter handle: @DeschampsTho5

---------

Co-authored-by: tdeschamps <tdeschamps@kameleoon.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
3 months ago
Bagatur a4798802ef
cli[patch]: ruff 0.5 (#23833) 3 months ago
Leonid Ganeline 55f6f91f17
core[patch]: docstrings `output_parsers` (#23825)
Added missed docstrings. Formatted docstrings to the consistent form.
3 months ago
Philippe PRADOS 26cee2e878
partners[patch]: MongoDB vectorstore to return and accept string IDs (#23818)
The MongoDB vectorstore has some errors.
- `add_texts() -> List` returns a list of `ObjectId`, not a list of
strings
- `delete()` with `id` never removes chunks.

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
3 months ago
Ikko Eltociear Ashimine 75734fbcf1
community: fix typo in unit tests for test_zenguard.py (#23819)
enviroment -> environment


3 months ago
Bagatur a0c2281540
infra: update mypy 1.10, ruff 0.5 (#23721)
```python
"""python scripts/update_mypy_ruff.py"""
import glob
import tomllib
from pathlib import Path

import toml
import subprocess
import re

ROOT_DIR = Path(__file__).parents[1]


def main():
    for path in glob.glob(str(ROOT_DIR / "libs/**/pyproject.toml"), recursive=True):
        print(path)
        with open(path, "rb") as f:
            pyproject = tomllib.load(f)
        try:
            pyproject["tool"]["poetry"]["group"]["typing"]["dependencies"]["mypy"] = (
                "^1.10"
            )
            pyproject["tool"]["poetry"]["group"]["lint"]["dependencies"]["ruff"] = (
                "^0.5"
            )
        except KeyError:
            continue
        with open(path, "w") as f:
            toml.dump(pyproject, f)
        cwd = "/".join(path.split("/")[:-1])
        completed = subprocess.run(
            "poetry lock --no-update; poetry install --with typing; poetry run mypy . --no-color",
            cwd=cwd,
            shell=True,
            capture_output=True,
            text=True,
        )
        logs = completed.stdout.split("\n")

        to_ignore = {}
        for l in logs:
            # Use a raw string so the regex escapes are passed through unchanged.
            match = re.match(r"^(.*):(\d+): error:.*\[(.*)\]", l)
            if match:
                path, line_no, error_type = match.groups()
                if (path, line_no) in to_ignore:
                    to_ignore[(path, line_no)].append(error_type)
                else:
                    to_ignore[(path, line_no)] = [error_type]
        print(len(to_ignore))
        for (error_path, line_no), error_types in to_ignore.items():
            all_errors = ", ".join(error_types)
            full_path = f"{cwd}/{error_path}"
            try:
                with open(full_path, "r") as f:
                    file_lines = f.readlines()
            except FileNotFoundError:
                continue
            file_lines[int(line_no) - 1] = (
                file_lines[int(line_no) - 1][:-1] + f"  # type: ignore[{all_errors}]\n"
            )
            with open(full_path, "w") as f:
                f.write("".join(file_lines))

        subprocess.run(
            "poetry run ruff format .; poetry run ruff --select I --fix .",
            cwd=cwd,
            shell=True,
            capture_output=True,
            text=True,
        )


if __name__ == "__main__":
    main()

```
3 months ago
William FH 6cd56821dc
[Core] Unify function schema parsing (#23370)
Use pydantic to infer nested schemas and all that fun.
Includes Bagatur's convenient docstring parser and annotation support.


Previously we didn't adequately support many typehints in the
bind_tools() method on raw functions (like optionals/unions, nested
types, etc.)
3 months ago
Oguz Vuruskaner 2a2c0d1a94
community[deepinfra]: fix tool call parsing. (#23162)
This PR includes fix for DeepInfra tool call parsing.
3 months ago
maang-h 525109e506
feat: Implement ChatBaichuan asynchronous interface (#23589)
- **Description:** Add interface to `ChatBaichuan` to support
asynchronous requests
    - `_agenerate` method
    - `_astream` method

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
3 months ago
Leonid Ganeline 716a316654
core: docstrings `indexing` (#23785)
Added missing docstrings. Formatted docstrings to a consistent form.
3 months ago
Leonid Ganeline 30fdc2dbe7
core: docstrings `messages` (#23788)
Added missing docstrings. Formatted docstrings to a consistent form.
3 months ago
ccurme 54e730f6e4
fireworks[patch]: read from tool calls attribute (#23820) 3 months ago
Bagatur ebb404527f
anthropic[patch]: Release 0.1.19 (#23783) 3 months ago
Bagatur 6168c846b2
openai[patch]: Release 0.1.14 (#23782) 3 months ago
Bagatur cb9812593f
openai[patch]: expose model request payload (#23287)
![Screenshot 2024-06-21 at 3 12 12
PM](https://github.com/langchain-ai/langchain/assets/22008038/6243a01f-1ef6-4085-9160-2844d9f2b683)
3 months ago
Bagatur ed200bf2c4
anthropic[patch]: expose payload (#23291)
![Screenshot 2024-06-21 at 4 56 02
PM](https://github.com/langchain-ai/langchain/assets/22008038/a2c6224f-3741-4502-9607-1a726a0551c9)
3 months ago
Bagatur 7a3d8e5a99
core[patch]: Release 0.2.11 (#23780) 3 months ago
Bagatur d677dadf5f
core[patch]: mark RemoveMessage beta (#23656) 3 months ago
ccurme 1d54ac93bb
ai21[patch]: release 0.1.7 (#23781) 3 months ago
Asaf Joseph Gardin 320dc31822
partners: AI21 Labs Jamba Streaming Support (#23538)
- **Description:** Added support for streaming in the AI21 Jamba model
- **Twitter handle:** https://github.com/AI21Labs

---------

Co-authored-by: Asaf Gardin <asafg@ai21.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
3 months ago
Qingchuan Hao 5cd4083457
community: make Bing web search the only option (#23523)
This PR makes Bing web search the only option for BingSearchAPIWrapper to
facilitate and simplify the user interface in LangChain.
This is follow-up work to
https://github.com/langchain-ai/langchain/pull/23306.
3 months ago
ccurme 7c1cddf1b7
anthropic[patch]: release 0.1.18 (#23778) 3 months ago
ccurme c9dac59008
anthropic[patch]: fix model name in some integration tests (#23779) 3 months ago
Bagatur 7a6c06cadd
anthropic[patch]: tool output parser fix (#23647) 3 months ago
ccurme 46cbf0e4aa
anthropic[patch]: use core output parsers for structured output (#23776)
Also add to standard tests for structured output.
3 months ago
kiarina dc396835ed
langchain_anthropic: add stop_reason in ChatAnthropic stream result (#23689)
`ChatAnthropic` can get `stop_reason` from the resulting `AIMessage` in
`invoke` and `ainvoke`, but not in `stream` and `astream`.
This is a different behavior from `ChatOpenAI`.
It is possible to get `stop_reason` from `stream` as well, since it is
needed to determine the next action after the LLM call. This would be
easier to handle in situations where only `stop_reason` is needed.

- Issue: NA
- Dependencies: NA
- Twitter handle: https://x.com/kiarina37
3 months ago
maang-h e4e28a6ff5
community[patch]: Fix MiniMaxChat validate_environment error (#23770)
- **Description:** Fix some issues in MiniMaxChat
  - Fix `minimax_api_host` not in `values` error
  - Remove `minimax_group_id` from reading environment variables; the
`minimax_group_id` is no longer used in MiniMaxChat
  - Invoke callback prior to yielding the token (issue #16913)
3 months ago
SN acc457f645
core[patch]: fix nested sections for mustache templating (#23747)
The prompt template variable detection only worked for singly-nested
sections because we just kept track of whether we were in a section and
then set that to false as soon as we encountered an end block. i.e. the
following:

```
{{#outerSection}}
    {{variableThatShouldntShowUp}}
    {{#nestedSection}}
        {{nestedVal}}
    {{/nestedSection}}
    {{anotherVariableThatShouldntShowUp}}
{{/outerSection}}
```

Would yield `['outerSection', 'anotherVariableThatShouldntShowUp']` as
input_variables (whereas it should just yield `['outerSection']`). This
fixes that by keeping track of the current depth and using a stack.
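
A minimal sketch of the stack-based approach described above (not the actual parser code; the token shape is an assumption):

```python
def top_level_variables(tokens):
    """Collect only variables outside any section, given tokens as (type, name) pairs."""
    section_stack = []
    variables = []
    for token_type, name in tokens:
        if token_type == "section":
            if not section_stack:
                variables.append(name)  # a top-level section is itself an input variable
            section_stack.append(name)
        elif token_type == "end":
            section_stack.pop()
        elif token_type == "variable" and not section_stack:
            variables.append(name)
    return variables
```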
3 months ago
Eugene Yurtsev 46ff0f7a3c
community[patch]: Update @root_validators to use explicit pre=True or pre=False (#23737) 3 months ago
Igor Drozdov b664dbcc36
feat(community): add support for tool_calls response (#23765)
When `model_kwargs={"tools": tools}` is passed to `ChatLiteLLM`, the tools
are executed, but the response is not recognized correctly.

Let's add `tool_calls` to the `additional_kwargs`.


## ChatAnthropic

I used the following example to verify the output of llm with tools:

```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_anthropic import ChatAnthropic

class GetWeather(BaseModel):
    '''Get the current weather in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

class GetPopulation(BaseModel):
    '''Get the current population in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

llm = ChatAnthropic(model="claude-3-sonnet-20240229")
llm_with_tools = llm.bind_tools([GetWeather, GetPopulation])
ai_msg = llm_with_tools.invoke("Which city is hotter today and which is bigger: LA or NY?")
print(ai_msg.tool_calls)
```

I get the following response:

```json
[{'name': 'GetWeather', 'args': {'location': 'Los Angeles, CA'}, 'id': 'toolu_01UfDA89knrhw3vFV9X47neT'}, {'name': 'GetWeather', 'args': {'location': 'New York, NY'}, 'id': 'toolu_01NrYVRYae7m7z7tBgyPb3Gd'}, {'name': 'GetPopulation', 'args': {'location': 'Los Angeles, CA'}, 'id': 'toolu_01EPFEpDgzL6vV2dTpD9SVP5'}, {'name': 'GetPopulation', 'args': {'location': 'New York, NY'}, 'id': 'toolu_01B5J6tPJXgwwfhQX9BHP2dt'}]
```

## LiteLLM

Based on https://litellm.vercel.app/docs/completion/function_call

```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.utils.function_calling import convert_to_openai_tool
import litellm

class GetWeather(BaseModel):
    '''Get the current weather in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

class GetPopulation(BaseModel):
    '''Get the current population in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

prompt = "Which city is hotter today and which is bigger: LA or NY?"
tools = [convert_to_openai_tool(GetWeather), convert_to_openai_tool(GetPopulation)]

response = litellm.completion(model="claude-3-sonnet-20240229", messages=[{'role': 'user', 'content': prompt}], tools=tools)
print(response.choices[0].message.tool_calls)
```

```python
[ChatCompletionMessageToolCall(function=Function(arguments='{"location": "Los Angeles, CA"}', name='GetWeather'), id='toolu_01HeDWV5vP7BDFfytH5FJsja', type='function'), ChatCompletionMessageToolCall(function=Function(arguments='{"location": "New York, NY"}', name='GetWeather'), id='toolu_01EiLesUSEr3YK1DaE2jxsQv', type='function'), ChatCompletionMessageToolCall(function=Function(arguments='{"location": "Los Angeles, CA"}', name='GetPopulation'), id='toolu_01Xz26zvkBDRxEUEWm9pX6xa', type='function'), ChatCompletionMessageToolCall(function=Function(arguments='{"location": "New York, NY"}', name='GetPopulation'), id='toolu_01SDqKnsLjvUXuBsgAZdEEpp', type='function')]
```

## ChatLiteLLM

When I try the following

```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.utils.function_calling import convert_to_openai_tool
from langchain_community.chat_models import ChatLiteLLM

class GetWeather(BaseModel):
    '''Get the current weather in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

class GetPopulation(BaseModel):
    '''Get the current population in a given location'''

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")

prompt = "Which city is hotter today and which is bigger: LA or NY?"
tools = [convert_to_openai_tool(GetWeather), convert_to_openai_tool(GetPopulation)]

llm = ChatLiteLLM(model="claude-3-sonnet-20240229", model_kwargs={"tools": tools})
ai_msg = llm.invoke(prompt)
print(ai_msg)
print(ai_msg.tool_calls)
```

```python
content="Okay, let's find out the current weather and populations for Los Angeles and New York City:" response_metadata={'token_usage': Usage(prompt_tokens=329, completion_tokens=193, total_tokens=522), 'model': 'claude-3-sonnet-20240229', 'finish_reason': 'tool_calls'} id='run-748b7a84-84f4-497e-bba1-320bd4823937-0'
[]
```

---

When I apply the changes of this PR, the output is

```json
[{'name': 'GetWeather', 'args': {'location': 'Los Angeles, CA'}, 'id': 'toolu_017D2tGjiaiakB1HadsEFZ4e'}, {'name': 'GetWeather', 'args': {'location': 'New York, NY'}, 'id': 'toolu_01WrDpJfVqLkPejWzonPCbLW'}, {'name': 'GetPopulation', 'args': {'location': 'Los Angeles, CA'}, 'id': 'toolu_016UKyYrVAV9Pz99iZGgGU7V'}, {'name': 'GetPopulation', 'args': {'location': 'New York, NY'}, 'id': 'toolu_01Sgv1imExFX1oiR1Cw88zKy'}]
```


Co-authored-by: Igor Drozdov <idrozdov@gitlab.com>
3 months ago
Eugene Yurtsev 338cef35b4
community[patch]: update @root_validator in utilities namespace (#23768)
Update all utilities to use `pre=True` or `pre=False`

https://github.com/langchain-ai/langchain/issues/22819
3 months ago