Commit Graph

6807 Commits (ee5bd986defc90770a705535ff841aed354f2e5a)
 

Author SHA1 Message Date
Xin Liu 0a7d360ba4
feat: new integration `wasm_chat` (#14787)

Adds the `WasmChat` integration. `WasmChat` runs GGUF models locally or via a
chat service in lightweight, secure WebAssembly containers. In this PR,
`WasmChatService` is introduced as the first step of the integration.
`WasmChatService` is driven by
[llama-api-server](https://github.com/second-state/llama-utils) and the
[WasmEdge Runtime](https://wasmedge.org/).
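A minimal usage sketch, assuming a `service_url` parameter and the
`langchain_community.chat_models.wasm_chat` module path; both are hypothetical
here and may differ from the merged integration.

```python
# Hypothetical sketch: the import path and constructor argument are
# assumptions, not taken from this PR.
from langchain_community.chat_models.wasm_chat import WasmChatService
from langchain_core.messages import HumanMessage

# Point the chat model at a locally running llama-api-server instance.
chat = WasmChatService(service_url="http://localhost:10086")  # hypothetical parameter
response = chat.invoke([HumanMessage(content="What is WebAssembly?")])
print(response.content)
```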

---------

Signed-off-by: Xin Liu <sam@secondstate.io>
6 months ago
Harrison Chase 51dcb89a72
cleanup getting started (#15450) 6 months ago
Leonid Ganeline 2bbee894bb
fixed a dependency duplicate (#15444)
BaseModel was inherited twice; kept only one.
6 months ago
William FH 65afc13b8b
[Improvement] Evals: Add git info (#15446) 6 months ago
Anush 58cc7878e9
refactor: Qdrant async improvements (#14492)
Follow up on https://github.com/langchain-ai/langchain/pull/13048.
This PR intends to simplify the Qdrant async implementation by replacing
the internal GRPC methods with the `QdrantAsyncClient` methods.
This is a backward compatible change with no additional steps required
after merge.
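As a rough illustration of what the change builds on (a sketch, not code from
this PR): qdrant-client ships an async client, exposed as `AsyncQdrantClient`
in recent releases, whose methods replace the previous internal gRPC calls.

```python
# Sketch only: the URL is a placeholder and assumes a running Qdrant server.
import asyncio

from qdrant_client import AsyncQdrantClient

async def main() -> None:
    client = AsyncQdrantClient(url="http://localhost:6333")  # placeholder URL
    # Async counterpart of the sync client API.
    collections = await client.get_collections()
    print(collections)

asyncio.run(main())
```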

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
6 months ago
Li-Lun Lin cda68d717c
core[patch]: update LanguageModelInput from List to Sequence (#14405)
Co-authored-by: Erick Friis <erick@langchain.dev>
6 months ago
JuR-0 4dab37741a
Fix Bedrock broad error catching (#14398)
Fixes #14347 

- **Description:** Added the traceback of the previous error to keep the
initial error type.
  - **Issue:** #14347
  - **Dependencies:** None

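The general Python pattern this refers to, sketched generically (not the
actual Bedrock client code): re-raise with the original exception chained so
its type and traceback are preserved.

```python
# Generic sketch: `raise ... from err` keeps the initial exception type and
# traceback attached to the new error instead of swallowing it.
def call_model(client, body: dict) -> dict:
    try:
        return client.invoke(body)
    except Exception as err:
        raise ValueError(f"Error raised by bedrock service: {err}") from err
```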

---------

Co-authored-by: Julien Raffy <julien.raffy@emeria.eu>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
6 months ago
amaleki2 413a56b8f1
adding vectorstore_kwarg attribute to search_similarity function (#14604)
- **Description:** adds the ability to pass extra vectorstore parameters
through to `SemanticSimilarityExampleSelector` (see the sketch below).
  - **Issue:** #14583
  - **Dependencies:** no dependencies
  - **Tag maintainer:** 
  - **Twitter handle:** @AmirMalekiz
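A rough sketch of the intent; the `vectorstore_kwargs` name follows the PR
title and is treated as hypothetical here, so the exact placement in the
merged API may differ.

```python
# Sketch only: `vectorstore_kwargs` is taken from the PR title and may not
# match the merged parameter name exactly.
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import FAISS

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
]
vectorstore = FAISS.from_texts(
    [" ".join(ex.values()) for ex in examples],
    FakeEmbeddings(size=32),
    metadatas=examples,
)
selector = SemanticSimilarityExampleSelector(
    vectorstore=vectorstore,
    k=1,
    vectorstore_kwargs={"fetch_k": 10},  # hypothetical: extra search options
)
print(selector.select_examples({"input": "joyful"}))
```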

---------

Co-authored-by: Amir Maleki <amaleki@fb.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
6 months ago
Bob Lin e93be14c11
Improvement: Allow passing parameters to the underlying es_client. Closes: #14403 (#14435)
### Description

In https://github.com/langchain-ai/langchain/issues/14403, the user
mentioned that they want to skip SSL verification and need to pass more
parameters.

I found that the `Elasticsearch` class [has very many
parameters](98f2af2134/elasticsearch/_sync/client/__init__.py (L131-L191)
):

<img width="1097" alt="Screenshot 2023-12-08 at 4 24 39 PM"
src="https://github.com/langchain-ai/langchain/assets/10000925/f2201554-b41a-4388-a8e8-c14a2d0466d4">

To adapt to more situations, I want to add a kwargs parameter so that users
can pass additional `Elasticsearch` parameters, as
[redis](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/redis/base.py#L253),
[tair](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/tair.py#L32),
[myscale](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/myscale.py#L112)
and others already do.
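For example, a sketch of the kind of configuration this enables: build the
`Elasticsearch` client with the options you need (such as skipping TLS
verification) and hand it to the vector store. Class and parameter choices
below are illustrative, not prescriptive.

```python
# Sketch: configure the underlying client explicitly (e.g. no TLS
# verification, as asked in #14403) and pass it via `es_connection`.
from elasticsearch import Elasticsearch
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import ElasticsearchStore

es_client = Elasticsearch("https://localhost:9200", verify_certs=False)  # example options
store = ElasticsearchStore(
    index_name="demo",
    embedding=FakeEmbeddings(size=32),
    es_connection=es_client,
)
```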
6 months ago
codehound42 8aa921d3a4
Support `score_threshold` in SupabaseVectorStore similarity search (#14439)
Description: Add support for setting the `score_threshold` for
similarity search in SupabaseVectorStore.

This pull request addresses issue #14438
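A usage sketch, assuming a Supabase project already prepared for pgvector;
the URL and key are placeholders, and the `score_threshold` keyword follows
this PR's description.

```python
# Sketch only: requires a Supabase project prepared for pgvector; credentials
# are placeholders and the threshold keyword follows this PR.
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import SupabaseVectorStore
from supabase.client import create_client

supabase = create_client("https://<project>.supabase.co", "<service-role-key>")
store = SupabaseVectorStore(
    client=supabase,
    embedding=FakeEmbeddings(size=32),
    table_name="documents",
    query_name="match_documents",
)
docs = store.similarity_search("reset password", k=4, score_threshold=0.75)
```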

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
6 months ago
Antonio Pisani d4a98e4e04
core: update json output parser (#15079)
- **Description:** changed json.py to handle additional cases where a partial
JSON string needs to be parsed, essentially by dropping the last character of
the string until a valid JSON string is found or the string is empty (see the
sketch below). Also added additional test cases.

- **Issue:** the function parse_partial_json could not parse cases where the
key is present but the value is not.
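The core idea, sketched independently of the actual json.py implementation:

```python
import json
from typing import Any, Optional

# Sketch of the strategy described above (not the actual parse_partial_json
# code): close any still-open braces, try to parse, and otherwise drop the
# trailing character until something parses or the string is exhausted.
def parse_partial_json_sketch(s: str) -> Optional[Any]:
    while s:
        candidate = s + "}" * (s.count("{") - s.count("}"))
        try:
            return json.loads(candidate)
        except json.JSONDecodeError:
            s = s[:-1]
    return None

# A partial object whose last key has no value yet:
print(parse_partial_json_sketch('{"a": 1, "b":'))  # -> {'a': 1}
```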

---------

Co-authored-by: Nuno Campos <nuno@langchain.dev>
6 months ago
YISH eecfa81918
Add the collection_description parameter to Milvus (#14524)
Because Milvus' collection_name doesn't support UTF-8 characters from other
languages, I want a `collection_description` parameter.
6 months ago
Evgenii Molov b4ec340fb3
Fix failing serpapi response processing for Google Maps API (#14817)
**Description:** Fix for processing the serpapi response for the Google Maps
API.
**Issue:** The corresponding [api](https://serpapi.com/google-maps-api)
returns 'local_results' as a list, while the old version called
`res["local_results"].keys()` on that list. As a result we got the exception
`AttributeError: 'list' object has no attribute 'keys'`.

Way to reproduce the wrong behaviour:
```
from langchain_community.utilities import SerpAPIWrapper  # requires the google-search-results package

params = {
    "engine": "google_maps",
    "type": "search",
    "google_domain": "google.de",
    "ll": "@51.1917,10.525,14z",
    "hl": "de",
    "gl": "de",
}
search = SerpAPIWrapper(params=params)
results = search.run("cafe")
```
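The gist of the fix, as a simplified sketch (not the actual
`_process_response` code): branch on the type of `local_results` instead of
assuming it is a dict.

```python
# Simplified sketch: 'local_results' may be a dict (older engines) or a list
# (Google Maps), so check its type before calling dict methods on it.
def top_local_result(res: dict):
    local = res.get("local_results")
    if isinstance(local, list) and local:
        return local[0]  # Google Maps returns a list of places
    if isinstance(local, dict) and local:
        first_key = next(iter(local))  # previous dict-shaped handling
        return local[first_key]
    return None
```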

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Ran <rccalman@gmail.com>
6 months ago
YISH da0f750a0b
Milvus allows to store metadata as json field (#14636)
Because Milvus doesn't support nullable fields and document metadata is
very rich, it makes more sense to store the metadata as JSON.


https://github.com/milvus-io/pymilvus/issues/1705#issuecomment-1731112372


---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
6 months ago
Erick Friis 620168e459
docs: together ai updates (#15435) 6 months ago
Bagatur 93e924ec96
langchain[patch], docs: update agent toolkit imports (#15434) 6 months ago
Ashley Xu 0ce7858529
feat: add Google BigQueryVectorSearch in vectorstore (#14829)
BigQuery vector search lets you use GoogleSQL to do semantic search,
using vector indexes for fast but approximate results, or using brute
force for exact results.

This PR integrates LangChain vectorstore with BigQuery Vector Search.
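A usage sketch; the project, dataset, and table names are placeholders, and
the constructor arguments are assumptions based on the feature description
rather than the merged signature.

```python
# Sketch only: names are placeholders and the exact constructor signature may
# differ from the merged BigQueryVectorSearch API.
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import BigQueryVectorSearch

store = BigQueryVectorSearch(
    project_id="my-gcp-project",    # placeholder
    dataset_name="my_dataset",      # placeholder
    table_name="doc_embeddings",    # placeholder
    embedding=FakeEmbeddings(size=32),
)
store.add_texts(["BigQuery can do vector search with GoogleSQL."])
print(store.similarity_search("vector search in BigQuery", k=1))
```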


---------

Co-authored-by: Vlad Kolesnikov <vladkol@google.com>
6 months ago
JaguarDB 02f59c2035
Use args option in jaguar so it takes more options in similarity search (#15080)
- **Description:** replace score_threshold with args
  - **Issue:** needs a way to pass more options to similarity search
  - **Dependencies:** None
  - **Twitter handle:** @workbot

---------

Co-authored-by: JY <jyjy@jaguardb>
6 months ago
chyroc 37ad6ec248
Refactor: use SecretStr for tongyi chat-model (#15102) 6 months ago
Shaurya Rohatgi e1c2cd7a28
community: Semanticscholar tool to search 200M+ scientific articles (#15151)
- **Description:** Tool now supports querying over 200 million
scientific articles, vastly expanding its reach beyond the 2 million
articles accessible through Arxiv. This update significantly broadens
access to the entire scope of scientific literature.
- **Dependencies:** semanticscholar
https://github.com/danielnsilva/semanticscholar
  - **Twitter handle:** @shauryr

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
6 months ago
aqibamir 073e4107cd
Fixed minor typo in self_query.ipynb (#15196)
6 months ago
dudub12 7e6b0056b8
SQLDatabase drop the column names in the result. (#15361)
Fix for the following bug:
https://github.com/langchain-ai/langchain/issues/15360

---------

Co-authored-by: dudu butbul <100126964+dudu-upstream@users.noreply.github.com>
6 months ago
chyroc 07d294b5ec
Fix: fix Bing Search empty result exception, fix #15384 (#15387)
fix https://github.com/langchain-ai/langchain/issues/15384
6 months ago
Bagatur 1678d6ca17
langchain[patch], experimental[patch], docs: update tools imports (#15433) 6 months ago
Bob Lin e57e50b213
Remove unused `_get_python_repl` (#15389)
This part of the code can also be safely cleaned up.
6 months ago
Dariusz Kajtoch 15b6c049d4
core:adds tests for partial_variables (#15427)
**Description:** Added small tests for partial_variables in
PromptTemplate; they were missing.
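For reference, `partial_variables` pre-fills some template variables so
callers only supply the rest; a minimal illustration:

```python
from langchain_core.prompts import PromptTemplate

# Pre-fill `language` via partial_variables; only `question` is required later.
prompt = PromptTemplate(
    template="Answer in {language}: {question}",
    input_variables=["question"],
    partial_variables={"language": "French"},
)
print(prompt.format(question="What is the capital of Japan?"))
# -> "Answer in French: What is the capital of Japan?"
```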
6 months ago
suhas-kotaki 73a628de9a
added fix for key error: doc_id (#15428)
6 months ago
Leonid Ganeline 1e6519edc2
docs `Microsoft` platform page update (#15420)
Added two new document_loader references. Improved the format
consistency of the example pages
6 months ago
Leonid Ganeline b8c6ebf647
refactor `utils` (#15432)
The `langchain` package [still holds several
artifacts](https://api.python.langchain.com/en/latest/langchain_api_reference.html#module-langchain.utils)
that belong to `community`. If they were moved, the `langchain.utils`
namespace could be removed completely.
- moved `ernie_functions` artifacts to `community`
6 months ago
Bagatur fa5d49f2c1
docs, experimental[patch], langchain[patch], community[patch]: update storage imports (#15429)
ran 
```bash
g grep -l "langchain.vectorstores" | xargs -L 1 sed -i '' "s/langchain\.vectorstores/langchain_community.vectorstores/g"
g grep -l "langchain.document_loaders" | xargs -L 1 sed -i '' "s/langchain\.document_loaders/langchain_community.document_loaders/g"
g grep -l "langchain.chat_loaders" | xargs -L 1 sed -i '' "s/langchain\.chat_loaders/langchain_community.chat_loaders/g"
g grep -l "langchain.document_transformers" | xargs -L 1 sed -i '' "s/langchain\.document_transformers/langchain_community.document_transformers/g"
g grep -l "langchain\.graphs" | xargs -L 1 sed -i '' "s/langchain\.graphs/langchain_community.graphs/g"
g grep -l "langchain\.memory\.chat_message_histories" | xargs -L 1 sed -i '' "s/langchain\.memory\.chat_message_histories/langchain_community.chat_message_histories/g"
gco master libs/langchain/tests/unit_tests/*/test_imports.py
gco master libs/langchain/tests/unit_tests/**/test_public_api.py
```
6 months ago
Harrison Chase a33d92306c
add get prompts method (#15425) 6 months ago
Nuno Campos 6810b4b0bc
Use tz-aware utc datetimes in tracer (#15187)
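For context, the difference is between naive and timezone-aware UTC
timestamps; a small generic illustration (not the tracer code itself):

```python
from datetime import datetime, timezone

naive = datetime.utcnow()           # naive: no tzinfo attached
aware = datetime.now(timezone.utc)  # tz-aware UTC, as the tracer now uses
print(naive.tzinfo, aware.tzinfo)   # None vs. UTC
```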
6 months ago
Bagatur 480626dc99
docs, community[patch], experimental[patch], langchain[patch], cli[pa… (#15412)
…tch]: import models from community

ran
```bash
git grep -l 'from langchain\.chat_models' | xargs -L 1 sed -i '' "s/from\ langchain\.chat_models/from\ langchain_community.chat_models/g"
git grep -l 'from langchain\.llms' | xargs -L 1 sed -i '' "s/from\ langchain\.llms/from\ langchain_community.llms/g"
git grep -l 'from langchain\.embeddings' | xargs -L 1 sed -i '' "s/from\ langchain\.embeddings/from\ langchain_community.embeddings/g"
git checkout master libs/langchain/tests/unit_tests/llms
git checkout master libs/langchain/tests/unit_tests/chat_models
git checkout master libs/langchain/tests/unit_tests/embeddings/test_imports.py
make format
cd libs/langchain; make format
cd ../experimental; make format
cd ../core; make format
```
6 months ago
Nuno Campos 9cbf14dec2
Fetch runnable config from context var inside runnable lambda and runnable generator (#15334)
- easier to write custom logic/loops with automatic tracing
- if you don't want streaming support, write a regular function and pass it
to RunnableLambda
- if you do want streaming, write a generator and pass it to
RunnableGenerator

```py
import json
from typing import AsyncIterator

from langchain_core.messages import BaseMessage, FunctionMessage, HumanMessage
from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import Runnable, RunnableGenerator, RunnablePassthrough
from langchain_core.tools import BaseTool

from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
from langchain.chat_models import ChatOpenAI
from langchain.tools.render import format_tool_to_openai_function


def _get_tavily():
    from langchain.tools.tavily_search import TavilySearchResults
    from langchain.utilities.tavily_search import TavilySearchAPIWrapper

    tavily_search = TavilySearchAPIWrapper()
    return TavilySearchResults(api_wrapper=tavily_search)


async def _agent_executor_generator(
    input: AsyncIterator[list[BaseMessage]],
    *,
    max_iterations: int = 10,
    tools: dict[str, BaseTool],
    agent: Runnable[list[BaseMessage], BaseMessage],
    parser: Runnable[BaseMessage, AgentAction | AgentFinish],
) -> AsyncIterator[BaseMessage]:
    messages = [m async for mm in input for m in mm]
    for _ in range(max_iterations):
        next_message = await agent.ainvoke(messages)
        yield next_message
        messages.append(next_message)

        parsed = await parser.ainvoke(next_message)
        if isinstance(parsed, AgentAction):
            result = await tools[parsed.tool].ainvoke(parsed.tool_input)
            next_message = FunctionMessage(name=parsed.tool, content=json.dumps(result))
            yield next_message
            messages.append(next_message)
        elif isinstance(parsed, AgentFinish):
            return


def get_agent_executor(tools: list[BaseTool], system_message: str):
    llm = ChatOpenAI(model="gpt-4-1106-preview", temperature=0, streaming=True)
    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", system_message),
            MessagesPlaceholder(variable_name="messages"),
        ]
    )
    llm_with_tools = llm.bind(
        functions=[format_tool_to_openai_function(t) for t in tools]
    )

    agent = {"messages": RunnablePassthrough()} | prompt | llm_with_tools
    parser = OpenAIFunctionsAgentOutputParser()
    executor = RunnableGenerator(_agent_executor_generator)
    return executor.bind(
        # map tool name -> tool so the generator can look tools up by name
        tools={tool.name: tool for tool in tools}, agent=agent, parser=parser
    )


agent = get_agent_executor([_get_tavily()], "You are a very nice agent!")


async def main():
    async for message in agent.astream(
        [HumanMessage(content="whats the weather in sf tomorrow?")]
    ):
        print(message)


if __name__ == "__main__":
    import asyncio

    asyncio.run(main())
```

results in this trace
https://smith.langchain.com/public/fa17f05d-9724-4d08-8fa1-750f8fcd051b/r
6 months ago
Bagatur 8e0d5813c2
langchain[patch], experimental[patch]: replace langchain.schema imports (#15410)
Import from core instead.

Ran:
```bash
git grep -l 'from langchain.schema\.output_parser' | xargs -L 1 sed -i '' "s/from\ langchain\.schema\.output_parser/from\ langchain_core.output_parsers/g"
git grep -l 'from langchain.schema\.messages' | xargs -L 1 sed -i '' "s/from\ langchain\.schema\.messages/from\ langchain_core.messages/g"
git grep -l 'from langchain.schema\.document' | xargs -L 1 sed -i '' "s/from\ langchain\.schema\.document/from\ langchain_core.documents/g"
git grep -l 'from langchain.schema\.runnable' | xargs -L 1 sed -i '' "s/from\ langchain\.schema\.runnable/from\ langchain_core.runnables/g"
git grep -l 'from langchain.schema\.vectorstore' | xargs -L 1 sed -i '' "s/from\ langchain\.schema\.vectorstore/from\ langchain_core.vectorstores/g"
git grep -l 'from langchain.schema\.language_model' | xargs -L 1 sed -i '' "s/from\ langchain\.schema\.language_model/from\ langchain_core.language_models/g"
git grep -l 'from langchain.schema\.embeddings' | xargs -L 1 sed -i '' "s/from\ langchain\.schema\.embeddings/from\ langchain_core.embeddings/g"
git grep -l 'from langchain.schema\.storage' | xargs -L 1 sed -i '' "s/from\ langchain\.schema\.storage/from\ langchain_core.stores/g"
git checkout master libs/langchain/tests/unit_tests/schema/
make format
cd libs/experimental
make format
cd ../langchain
make format
```
6 months ago
Bagatur a3d47b4f19
docs: fix model i/o index links (#15421) 6 months ago
Bagatur 5a43e0e885
docs: fix agents index links (#15419) 6 months ago
Ankush Gola f50dba12ff
Calculate trace_id and dotted_order client side (#15351)
6 months ago
Erick Friis a8f6f33cd9
infra: remove path filter on check_diffs (#15418)
CI should run on https://github.com/langchain-ai/langchain/pull/15412

But github only checks first 300 files:
https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#git-diff-comparisons

> Diffs are limited to 300 files. If there are files changed that aren't
matched in the first 300 files returned by the filter, the workflow will
not run. You may need to create more specific filters so that the
workflow will run automatically.
6 months ago
Bob Lin 4488234d64
Update `gpt4all.mdx` doc (#15392)
The [pyllamacpp](https://github.com/nomic-ai/pyllamacpp) repository has
been archived and the model name and usage need to be changed.
6 months ago
Mohammad Mohtashim b6c57d38fa
Langchain_community: Small Fix when loading facebook messages (#15358)
- **Description:** SingleFileFacebookMessengerChatLoader did not handle
the case where messages had stickers and/or photos, so this fixes that.
  - **Issue:** #15356

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
6 months ago
Mateusz Szewczyk cbfaccc424
WatsonxLLM updates/enhancements (#14598)
- **Description:** updates/enhancements to IBM
[watsonx.ai](https://www.ibm.com/products/watsonx-ai) LLM provider
(prompt tuned models and prompt templates deployments support)
- **Dependencies:**
[ibm-watsonx-ai](https://pypi.org/project/ibm-watsonx-ai/),
  - **Tag maintainer:** @hwchase17, @eyurtsev, @baskaryan
  - **Twitter handle:** details in comment below.


---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
6 months ago
Manjunath Janardhan 7a0feba9f7
GITLAB_URL should take default https://gitlab.com instead of error (#14638)
The fix in #14221 broke the default GitLab URL, forcing users to specify
GITLAB_URL even for the default one. With this fix, if GITLAB_URL is not set,
the default GitLab URL is used (see the sketch below).
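The intended behaviour, sketched generically (the wrapper's actual attribute
handling may differ):

```python
import os

# Sketch of the intended default: fall back to the public GitLab host when
# GITLAB_URL is not set, instead of raising an error.
gitlab_url = os.environ.get("GITLAB_URL", "https://gitlab.com")
```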

  - **Description:** use the default GitLab URL instead of None
  - **Issue:** the issue #14221 broke the default GitLab URL
  - **Dependencies:** None
  - **Tag maintainer:** @hwchase17 
  - **Twitter handle:** manjunath_shiva
6 months ago
David dcf047c48f
add api_base to _client_params (community version of #14393) (#14644)
- **Description:** This PR adds `api_base` to `_client_params` in the
`chat_model` of LiteLLM to ensure it's included in API calls.
Previously, `api_base` was set on the client but was not included in the
parameters passed to the completion function. This change ensures that
`api_base` is correctly passed to all API calls.
  - **Issue:** #14338
  - **Tag maintainer:** @hwchase17 @agola11
  - **Twitter handle:** @LMS_David_RS
6 months ago
Lucca Zenóbio 3bd0a15506
Fix for openai multi tools input format. (#14653)
Sometimes, the tool_schema is like:
  
` {'action_name': 'search_items', 'action': {'term': 'pizza'}}`

sometimes, especially with gpt-3.5, it comes like:

`{'action_name': 'search_items', 'term': 'pizza'}`

and it fails.

This PR is a way to make it work in both scenarios.

issues related: #6624
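A simplified sketch of handling both shapes (not the parser's exact code):

```python
# Sketch: accept both {'action_name': ..., 'action': {...}} and the flattened
# {'action_name': ..., '<arg>': ...} shape that gpt-3.5 sometimes produces.
def split_tool_call(data: dict) -> tuple[str, dict]:
    name = data["action_name"]
    if isinstance(data.get("action"), dict):
        args = data["action"]
    else:
        args = {k: v for k, v in data.items() if k != "action_name"}
    return name, args

print(split_tool_call({"action_name": "search_items", "action": {"term": "pizza"}}))
print(split_tool_call({"action_name": "search_items", "term": "pizza"}))
# both -> ('search_items', {'term': 'pizza'})
```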


Co-authored-by: Lucca Zenobio <lucca.zenobio@ifood.com.br>
6 months ago
xuxiang dd1d818a82
Fixing the Issue with DashScopeEmbeddings Handling More than 25 Rows of Data (#14662)
 
This change addresses the issue where the DashScope embedding API limits
requests to 25 lines of data; DashScopeEmbeddings did not handle cases with
more than 25 lines, leading to errors. I have implemented a fix to handle
data exceeding this limit efficiently (see the sketch below).
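The batching idea, sketched generically (the real fix lives inside
DashScopeEmbeddings):

```python
from typing import Callable, Iterable, List

# Generic sketch: split the input into chunks of at most 25 texts, since the
# DashScope embedding API rejects larger requests.
BATCH_SIZE = 25

def batched(texts: List[str], size: int = BATCH_SIZE) -> Iterable[List[str]]:
    for start in range(0, len(texts), size):
        yield texts[start : start + size]

def embed_documents_sketch(
    embed_batch: Callable[[List[str]], List[List[float]]], texts: List[str]
) -> List[List[float]]:
    # `embed_batch` stands in for a call that embeds up to 25 texts at once.
    results: List[List[float]] = []
    for chunk in batched(texts):
        results.extend(embed_batch(chunk))
    return results
```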

---------

Co-authored-by: xuxiang <xuxiang@aliyun.com>
6 months ago
Thomas B 9d8468a576
Enhancement on feature/yaml output parser (#14674)
Adding to my previous, already merged PR, I made some further
improvements:

* Added documentation to the existing Pydantic Parser notebook, with an
example using LCEL and `with_retry()` on `OutputParserException`.
* Added an additional output example to the prompt
* More lenient parser in terms of LLM output format
* Amended unit test

FYI @hwchase17

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
6 months ago
Zeeland ff10f30149
fix: syntax error in function docs (#14641)
fix syntax error in function docs
6 months ago
Paresh Chiramel 9be08a1956
Update _retrieve_ref inside json_schema.py to include an isdigit() check (#14745)
- **Description:** Update _retrieve_ref inside json_schema.py to include
an isdigit() check
- **Issue:** This library is used by dereference_refs inside
langchain_community.agent_toolkits.openapi.spec. When reading in a YAML file
that has references such as "400" or "401", the line `out = out[component]`
raises a KeyError. The isdigit() check ensures that a digit-like component
is converted to an integer before it is used as a key, preventing the error
(see the sketch below).
  - **Dependencies:** No dependencies
  - **Tag maintainer:** @baskaryan
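A sketch of the key handling described above, simplified relative to the real
`_retrieve_ref`:

```python
# Simplified sketch: YAML parses unquoted keys such as 400 as integers, so a
# digit-like path component must be converted before it is used as a key.
def retrieve_ref_sketch(ref: str, schema: dict):
    components = ref.split("/")[1:]  # e.g. "#/components/responses/400"
    out = schema
    for component in components:
        key = int(component) if component.isdigit() else component
        out = out[key]
    return out
```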

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
6 months ago
Joshua Sundance Bailey cfd27b1786
python-lint (#14689)
# Description: _python-lint_

This agent writes Python code that is formatted and linted using
`black`, `ruff`, and `mypy`, but does not execute the code. It writes
the code to a temporary file and then runs the linters. Once these
checks pass, the code is returned.

# Dependencies

- black
- ruff
- mypy

# Demo

The functionality can be seen here:
https://huggingface.co/spaces/joshuasundance/langchain-streamlit-demo
6 months ago