Commit Graph

2663 Commits (36e432672a2d1abd5c8793bd03a6b4848c7a9a7c)

Author SHA1 Message Date
kavinraj A S ab6b41937a
Fixed a typo in smart_llm prompt (#13052)
10 months ago
jeffpezzone 7c2ef06136
Adds "NIN" metadata filter for pgvector to all checking for set absence (#14205)
This PR adds support for metadata filters of the form:

`{"filter": {"key": { "NIN" : ["list", "of", "values"]}}}`

"IN" is already supported, so this is a quick & related update to add
"NIN"
10 months ago
lif 20d2b4a6ba
feat: Increased compatibility with new and old versions for dalle (#14222)
- **Description:** Increased compatibility with all versions of `openai` for
DALL·E.

This PR adds support for `openai` versions from 0.x through 1.3.
10 months ago
Wang Wei 7205bfdd00
feat: 1. Add system parameters, 2. Align with the QianfanChatEndpoint for function calling (#14275)
- **Description:** 
1. Add system parameters to the ERNIE LLM API to set the role of the
LLM.
2. Add support for the ERNIE-Bot-turbo-AI model according to the
documentation at https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Alp0kdm0n.
3. For the function call of ErnieBotChat, align with the
QianfanChatEndpoint.

With this PR, `QianfanChatEndpoint()` can use the function-calling
ability with `create_ernie_fn_chain()`. For example:

```python
from langchain.prompts import ChatPromptTemplate
import json
from langchain.prompts.chat import (
    ChatPromptTemplate,
)

from langchain.chat_models import QianfanChatEndpoint
from langchain.chains.ernie_functions import (
    create_ernie_fn_chain,
)

def get_current_news(location: str) -> str:
    """Get the current news based on the location.'

    Args:
        location (str): The location to query.
    
    Returns:
        str: Current news based on the location.
    """

    news_info = {
        "location": location,
        "news": [
            "I have a Book.",
            "It's a nice day, today."
        ]
    }

    return json.dumps(news_info)

def get_current_weather(location: str, unit: str="celsius") -> str:
    """Get the current weather in a given location

    Args:
        location (str): location of the weather.
        unit (str): unit of the temperature.
    
    Returns:
        str: weather in the given location.
    """

    weather_info = {
        "location": location,
        "temperature": "27",
        "unit": unit,
        "forecast": ["sunny", "windy"],
    }
    return json.dumps(weather_info)

template = ChatPromptTemplate.from_messages([
    ("user", "{user_input}"),
])

chat = QianfanChatEndpoint(model="ERNIE-Bot-4")
chain = create_ernie_fn_chain([get_current_weather, get_current_news], chat, template, verbose=True)
res = chain.run("北京今天的新闻是什么?")
print(res)
```

The result of the above code:
```
> Entering new LLMChain chain...
Prompt after formatting:
Human: 北京今天的新闻是什么?
> Finished chain.
{'name': 'get_current_news', 'arguments': {'location': '北京'}}
```

`ErnieBotChat` can now use the `system` parameter to set the role of the
LLM.

```python
from langchain.prompts import ChatPromptTemplate
from langchain.chains import LLMChain
from langchain.chat_models import ErnieBotChat

# The system prompt reads: "You are a very capable robot; your name is Xiaodingdang. Whatever you are asked, you can give an answer."
llm = ErnieBotChat(model_name="ERNIE-Bot-turbo-AI", system="你是一个能力很强的机器人,你的名字叫 小叮当。无论问你什么问题,你都可以给出答案。")
prompt = ChatPromptTemplate.from_messages(
    [
        ("human", "{query}"),
    ]
)
chain = LLMChain(llm=llm, prompt=prompt, verbose=True)
res = chain.run(query="你是谁?")  # "Who are you?"
print(res)
```

The result of the above code:

```
> Entering new LLMChain chain...
Prompt after formatting:
Human: 你是谁?
> Finished chain.
我是小叮当,一个智能机器人。我可以为你提供各种服务,包括回答问题、提供信息、进行计算等。如果你需要任何帮助,请随时告诉我,我会尽力为你提供最好的服务。
```
10 months ago
Leonid Kuligin fd5be55a7b
added get_num_tokens to GooglePalm (#14282)
added get_num_tokens to GooglePalm + a little bit of refactoring
10 months ago
Massimiliano Pronesti c215a4c9ec
feat(embeddings): text-embeddings-inference (#14288)
- **Description:** Added a notebook to illustrate how to use
`text-embeddings-inference` from huggingface. As
`HuggingFaceHubEmbeddings` was using a deprecated client, I took the
opportunity in this PR to update that too.

- **Issue:** #13286 

- **Dependencies**: None

- **Tag maintainer:** @baskaryan
10 months ago
Tim Van Wassenhove 85b88c33f3
Fixes issue-14295: Correctly pass along the kwargs (#14296)
- **Description:** Update code to correctly pass the kwargs 
  - **Issue:** #14295 
  - **Dependencies:**  - 
  - **Tag maintainer:** 

10 months ago
Jarkko Lagus 667ad6a5de
Add support for CORS options for AzureSearch (#14305)
- **Description:** Add support for setting the CORS options when using
AzureSearch indexes
10 months ago
Karim Assi 9401539e43
Allow not enforcing function usage when a single function is passed to openai function executable (#14308)
- **Description:** Allows not enforcing function usage when a single
function is passed to an OpenAI function executable (or corresponding
legacy chain). This is a desired feature when the model
does not have enough information to call a function and needs to get
back to the user.
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Tag maintainer:** N/A
10 months ago
Ran d22c13ec48
Mask API key for Minimax LLM (#14309)
- **Description:** Added masking for the API key for Minimax LLM + tests
inspired by https://github.com/langchain-ai/langchain/pull/12418.
- **Issue:** fixes
https://github.com/langchain-ai/langchain/issues/12165
- **Dependencies:** this fix is dependent on Minimax instantiation fix
which is introduced in
https://github.com/langchain-ai/langchain/pull/13439, so merge this one
after.
  - **Tag maintainer:** @eyurtsev

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
10 months ago
Eugene Yurtsev a74c03da3c
Add metadata to blob (#14162)
Add metadata to the blob object. This makes it easier
to make a pipeline that properly propagates metadata information
from raw content to the derived content.
10 months ago
Lance Martin 66848871fc
Multi-modal RAG template (#14186)
* OpenCLIP embeddings
* GPT-4V

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
10 months ago
James Braza 3b75d37cee
Adding `BaseChatMessageHistory.__str__` (#14311)
Adding `__str__` to base chat message history to make it easier to debug
10 months ago
James Braza 8b0060184d
Fixing empty input variable crashing `PromptTemplate` validations (#14314)
- Fixes `input_variables=[""]` crashing validations with a template
`"{}"`
- Uses `__cause__` for proper `Exception` chaining in
`check_valid_template`
10 months ago
Bagatur 6607cc6eab
experimental[patch]: Release 0.0.44 (#14310) 10 months ago
Eugene Yurtsev 80637727ea
hide api key: arcee (#14304)
Hide API key for Arcee

---------

Co-authored-by: raphael <raph.nunes95@gmail.com>
10 months ago
Bagatur b2e756c0a8
langchain[patch]: Release 0.0.346 (#14307) 10 months ago
Bagatur 4a5a13aab3
core[patch]: Release 0.0.10 (#14303) 10 months ago
Eun Hye Kim f758c8adc4
Fix #11737 issue (extra_tools option of create_pandas_dataframe_agent is not working) (#13203)
- **Description:** Fix #11737 issue (extra_tools option of
create_pandas_dataframe_agent is not working),
  - **Issue:** #11737 ,
  - **Dependencies:** no,
- **Tag maintainer:** @baskaryan, @eyurtsev, @hwchase17 I needed this
method at work, so I modified it myself and used it. There is a similar
issue(#11737) and PR(#13018) of @PyroGenesis, so I combined my code at
the original PR.
You may be busy, but it would be great help for me if you checked. Thank
you.
  - **Twitter handle:** @lunara_x 

If you need an .ipynb example about this, please tag me. 
I will share what I am working on after removing any work-related
content.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
10 months ago
Sean Bearden 77a15fa988
Added ability to pass arguments to the Playwright browser (#13146)
- **Description:** Enhanced `create_sync_playwright_browser` and
`create_async_playwright_browser` functions to accept a list of
arguments. These arguments are now forwarded to
`browser.chromium.launch()` for customizable browser instantiation (a hedged
usage sketch follows this list).
  - **Issue:** #13143
  - **Dependencies:** None
  - **Tag maintainer:** @eyurtsev,
  - **Twitter handle:** Dr_Bearden
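A hedged usage sketch; the keyword name `args` for the new list-of-arguments parameter is an assumption based on the description above, not confirmed from the source.

```python
from langchain.tools.playwright.utils import create_sync_playwright_browser

# `args` (assumed name) is forwarded to browser.chromium.launch().
browser = create_sync_playwright_browser(args=["--start-maximized", "--disable-gpu"])
```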

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
10 months ago
Joan Fontanals dcccf8fa66
adapt Jina Embeddings to new Jina AI Embedding API (#13658)
- **Description:** Adapt JinaEmbeddings to run with the new Jina AI
Embedding platform
- **Twitter handle:** https://twitter.com/JinaAI_

---------

Co-authored-by: Joan Fontanals Martinez <joan.fontanals.martinez@jina.ai>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
10 months ago
guillaumedelande ea0afd07ca
Update azuresearch.py following recent change from azure-search-documents library (#13472)
- **Description:** 

The reference library azure-search-documents changed in version
11.4.0:

1. Notebook explaining Azure AI Search updated with most recent info
2. HnswVectorSearchAlgorithmConfiguration --> HnswAlgorithmConfiguration
3. PrioritizedFields(prioritized_content_fields) -->
SemanticPrioritizedFields(content_fields)
4. SemanticSettings --> SemanticSearch
5. VectorSearch(algorithm_configurations) -->
VectorSearch(configurations)

These changes are now reflected in LangChain: the default vector search config
from langchain is now compatible with the officially released library from
Azure.

  - **Issue:**
Issue creating a new index (due to wrong class used for default vector
search configuration) if using latest version of azure-search-documents
with current langchain version
  - **Dependencies:** azure-search-documents>=11.4.0,
  - **Tag maintainer:** ,

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
10 months ago
price-deshaw 5cb3393e20
update OpenAI function agents' llm validation (#13538)
- **Description:** This PR modifies the LLM validation in OpenAI
function agents to check whether the LLM supports OpenAI functions based
on a property (`supports_oia_functions`) instead of whether the LLM
passed to the agent is an instance of `ChatOpenAI`. This allows classes
that extend `BaseChatModel` to be passed to these agents as long as
they've been integrated with the OpenAI APIs and have this property set,
even if they don't extend `ChatOpenAI`.
  - **Issue:** N/A
  - **Dependencies:** none
10 months ago
Max Weng 74c7b799ef
migrate openai audio api (#13557)
For issue https://github.com/langchain-ai/langchain/issues/13162,
migrate the openai audio api per the [openai v1.0.0 Migration
Guide](https://github.com/openai/openai-python/discussions/742).


---------

Co-authored-by: Double Max <max@ground-map.com>
10 months ago
Arnaud Gelas abbba6c7d8
openapi/planner.py: Deal with json in markdown output cases (#13576)
- **Description:** In openapi/planner, handle cases where the LLM returns
JSON wrapped in Markdown (an illustrative sketch follows this list).
- **Issue:** In some cases LLMs could return json in markdown which
can't be loaded.
  - **Dependencies:**
  - **Tag maintainer:** @eyurtsev
  - **Twitter handle:**
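As an illustration of the technique only (not the PR's actual code), one way to strip a Markdown code fence before parsing the JSON payload:

```python
import json
import re

FENCE = "`" * 3  # three backticks

def parse_possibly_fenced_json(text: str) -> dict:
    """Load JSON that may be wrapped in a Markdown code fence tagged json."""
    pattern = re.compile(FENCE + r"(?:json)?\s*(.*?)" + FENCE, re.DOTALL)
    match = pattern.search(text)
    payload = match.group(1) if match else text
    return json.loads(payload)

fenced = FENCE + 'json\n{"answer": 42}\n' + FENCE
print(parse_possibly_fenced_json(fenced))  # {'answer': 42}
```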

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
10 months ago
Harrison Chase 8eab4d95c0
Harrison/delegate from template (#14266)
Co-authored-by: M.R. Sopacua <144725145+msopacua@users.noreply.github.com>
10 months ago
Nolan b49104c2c9
Add missing doc key to metadata field in AzureSearch Vectorstore (#13328)
- **Description:** Adds doc key to metadata field when adding document
to Azure Search.
  - **Issue:** -,
  - **Dependencies:** -,
  - **Tag maintainer:** @eyurtsev,
  - **Twitter handle:** @finnless

Right now the document key with the name FIELDS_ID is not included in
the FIELDS_METADATA field, and therefore is not included in the Document
returned from a query. This is really annoying if you want to be able to
modify that item in the vectorstore.

Other's thoughts on this are welcome.
10 months ago
Jon Watte e042e5df35
fix: call _on_llm_error() (#13581)
Description: There's a copy-paste typo where on_llm_error() calls
_on_chain_error() instead of _on_llm_error().
Issue: #13580 
Dependencies: None
Tag maintainer: @hwchase17 
Twitter handle: @jwatte

"Run `make format`, `make lint` and `make test` to check this locally."
The test scripts don't work in a plain Ubuntu LTS 20.04 system.
It looks like the dev container pulling is stuck. Or maybe the internet
is just ornery today.

---------

Co-authored-by: jwatte <jwatte@observeinc.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
10 months ago
Hamza Ahmed fcc8e5e839
Update geodataframe.py (#13573)
The loader currently validates `data_frame[page_content_column].iloc[0]`, which is a
`shapely.geometry.point.Point`, against `gpd.GeoSeries`:

    if not isinstance(data_frame[page_content_column].iloc[0], gpd.GeoSeries):
        raise ValueError(f"Expected data_frame[{page_content_column}] to be a GeoSeries")

It needs to validate the GeoSeries itself, not the `shapely.geometry.point.Point`:

    if not isinstance(data_frame[page_content_column], gpd.GeoSeries):
        raise ValueError(f"Expected data_frame[{page_content_column}] to be a GeoSeries")

10 months ago
Harrison Chase 2213fc9711
Harrison/bookend ai (#14258)
Co-authored-by: stvhu-bookend <142813359+stvhu-bookend@users.noreply.github.com>
10 months ago
cxumol 0d47d15a9f
add(feat): Text Embeddings by Cloudflare Workers AI (#14220)
Add [Text Embeddings by Cloudflare Workers
AI](https://developers.cloudflare.com/workers-ai/models/text-embeddings/).
It's a new integration.
Trying to align it with its langchain-js version counterpart
[here](https://api.js.langchain.com/classes/embeddings_cloudflare_workersai.CloudflareWorkersAIEmbeddings.html).
- Dependencies: N/A
- Done `make format`, `make lint`, `make spell_check`, and `make
integration_tests`, and all checks passed.
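A hedged usage sketch; the class path and parameter names (`account_id`, `api_token`, `model_name`) are assumptions based on this integration, and the credentials are placeholders.

```python
from langchain.embeddings import CloudflareWorkersAIEmbeddings

embeddings = CloudflareWorkersAIEmbeddings(
    account_id="my-cloudflare-account-id",   # placeholder
    api_token="my-cloudflare-api-token",     # placeholder
    model_name="@cf/baai/bge-base-en-v1.5",
)
vectors = embeddings.embed_documents(["hello", "world"])
query_vector = embeddings.embed_query("hello")
```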
10 months ago
Harrison Chase c51001f01e
fix comet tracer (#14259) 10 months ago
Harrison Chase 4fb72ff76f
fake consistent embeddings cleanup (#14256)
delete code that could never be reached
10 months ago
Michael Landis e26906c1dc
feat: implement max marginal relevance for momento vector index (#13619)
**Description**

Implements `max_marginal_relevance_search` and
`max_marginal_relevance_search_by_vector` for the Momento Vector Index
vectorstore.
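A hedged usage sketch, assuming `vector_store` is an already-configured Momento Vector Index vectorstore; the call uses the standard VectorStore MMR signature.

```python
docs = vector_store.max_marginal_relevance_search(
    "what is momento?",
    k=4,              # number of documents to return
    fetch_k=20,       # candidates fetched before diversity re-ranking
    lambda_mult=0.5,  # 0 = maximum diversity, 1 = pure relevance
)
```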

Additionally bumps the `momento` dependency in the lock file and adds
logging to the implementation.

**Dependencies**

 updates `momento` dependency in lock file

**Tag maintainer**

@baskaryan 

**Twitter handle**

Please tag @momentohq for Momento Vector Index and @mloml for the
contribution 🙇

10 months ago
deedy5 ee9abb6722
Bugfix duckduckgo_search news search (#13670)
- **Description:** 
Bugfix duckduckgo_search news search
  - **Issue:** 
https://github.com/langchain-ai/langchain/issues/13648
  - **Dependencies:** 
None
  - **Tag maintainer:** 
@baskaryan

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
10 months ago
Aliaksandr Kuzmik 676a077c4e
Add CometTracer (#13661)
Hi! I'm Alex, Python SDK Team Lead from
[Comet](https://www.comet.com/site/).

This PR contains our new integration between langchain and Comet -
`CometTracer` class which uses new `comet_llm` python package for
submitting data to Comet.

No additional dependencies for the langchain package are required
directly, but if the user wants to use `CometTracer`, `comet-llm>=2.0.0`
should be installed. Otherwise an exception will be raised from
`CometTracer.__init__`.

A test for the feature is included.

There is also an already existing callback (and .ipynb file with
example) which ideally should be deprecated in favor of a new tracer. I
wasn't sure how exactly you'd prefer to do it. For example we could open
a separate PR for that.

I'm open to your ideas :)
10 months ago
Harrison Chase 921c4b5597
Harrison/searchapi (#14252)
Co-authored-by: SebastjanPrachovskij <86522260+SebastjanPrachovskij@users.noreply.github.com>
10 months ago
Colin Ulin 9f9cb71d26
Embaas - added backoff retries for network requests (#13679)
Running a large number of requests to Embaas' servers (or any server)
can result in intermittent network failures (both from local and
external network/service issues). This PR implements exponential backoff
retries to help mitigate this issue.
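As an illustration of the technique only (the PR's actual implementation may differ), a minimal exponential-backoff sketch using `tenacity`:

```python
import requests
from tenacity import retry, stop_after_attempt, wait_exponential

@retry(wait=wait_exponential(multiplier=1, min=1, max=30), stop=stop_after_attempt(5))
def post_with_backoff(url: str, payload: dict) -> dict:
    response = requests.post(url, json=payload, timeout=30)
    response.raise_for_status()  # raise on HTTP errors so the call is retried
    return response.json()
```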
10 months ago
Kastan Day 65faba91ad
langchain[patch]: Adding new Github functions for reading pull requests (#9027)
The Github utilities are fantastic, so I'm adding support for deeper
interaction with pull requests. Agents should read "regular" comments
and review comments, and the content of PR files (with summarization or
`ctags` abbreviations).

Progress:
- [x] Add functions to read pull requests and the full content of
modified files.
- [x] Function to use Github's built in code / issues search.

Out of scope:
- Smarter summarization of file contents of large pull requests (`tree`
output, or ctags).
- Smarter functions to checkout PRs and edit the files incrementally
before bulk committing all changes.
- Docs example for creating two agents:
- One watches issues: For every new issue, open a PR with your best
attempt at fixing it.
- The other watches PRs: For every new PR && every new comment on a PR,
check the status and try to finish the job.


---------

Co-authored-by: Erick Friis <erick@langchain.dev>
10 months ago
Hynek Kydlíček aa8ae31e5b
core[patch]: add response kwarg to on_llm_error
# Dependencies
None

# Twitter handle
@HKydlicek

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
10 months ago
Jacob Lee a26c4a0930
Allow base_store to be used directly with MultiVectorRetriever (#14202)
Allow users to pass a generic `BaseStore[str, bytes]` to
MultiVectorRetriever, removing the need to use the `create_kv_docstore`
method. This encoding will now happen internally.

@rlancemartin @eyurtsev

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
10 months ago
Vincent Brouwers 67662564f3
langchain[patch]: Fix `config` arg detection for wrapped lambda runnable (#14230)
**Description:**
When a RunnableLambda only receives a synchronous callback, this
callback is wrapped into an async one since #13408. However, this
wrapping with `(*args, **kwargs)` causes the `accepts_config` check at
[/libs/core/langchain_core/runnables/config.py#L342](ee94ef55ee/libs/core/langchain_core/runnables/config.py (L342))
to fail, as this checks for the presence of a "config" argument in the
method signature.

Adding `functools.wraps` around it resolves this.
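A minimal illustration of why this works: without `functools.wraps`, the wrapper's `(*args, **kwargs)` signature hides the `config` parameter from `inspect.signature`.

```python
import functools
import inspect

def callback(x, config):  # original callable exposing a `config` parameter
    return x

def bare_wrapper(*args, **kwargs):
    return callback(*args, **kwargs)

@functools.wraps(callback)
def wrapped(*args, **kwargs):
    return callback(*args, **kwargs)

print("config" in inspect.signature(bare_wrapper).parameters)  # False
print("config" in inspect.signature(wrapped).parameters)       # True
```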
10 months ago
Jacob Lee de86b84a70
Prefer byte store interface for Upstash BaseStore to match other Redis (#14201)
If we are not going to make the existing Docstore class also implement
`BaseStore[str, Document]`, IMO all base store implementations should
always be `[str, bytes]` so that they are more interchangeable.

CC @rlancemartin @eyurtsev
10 months ago
Harrison Chase 411aa9a41e
Harrison/nasa tool (#14245)
Co-authored-by: Jacob Matias <88005863+matiasjacob25@users.noreply.github.com>
Co-authored-by: Karam Daid <karam.daid@mail.utoronto.ca>
Co-authored-by: Jumana <jumana.fanous@mail.utoronto.ca>
Co-authored-by: KaramDaid <38271127+KaramDaid@users.noreply.github.com>
Co-authored-by: Anna Chester <74325334+CodeMakesMeSmile@users.noreply.github.com>
Co-authored-by: Jumana <144748640+jfanous@users.noreply.github.com>
10 months ago
nceccarelli 5fea63327b
Support Azure gov cloud in Azure Cognitive Search retriever (#13695)
- **Description:** The existing version hardcoded search.windows.net in
the base url. This is not compatible with the gov cloud. I am allowing
the user to override the default for gov cloud support.
  - **Issue:** N/A, did not write up in an issue,
  - **Dependencies:** None

---------

Co-authored-by: Nicholas Ceccarelli <nceccarelli2@moog.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
10 months ago
ealt e09b876863
Fixes error loading Obsidian templates (#13888)
- **Description:** Obsidian templates can include
[variables](https://help.obsidian.md/Plugins/Templates#Template+variables)
using double curly braces. `ObsidianLoader` uses PyYAML to parse the
frontmatter of documents. This parsing throws an error when it encounters
the variables' curly braces, which is avoided by temporarily substituting
safe strings before parsing (a minimal sketch of the idea follows this list).
  - **Issue:** #13887
  - **Tag maintainer:** @hwchase17
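A minimal sketch of the idea (not the PR's exact code): swap template variables for a safe placeholder before YAML parsing, since the double curly braces otherwise break `yaml.safe_load`.

```python
import re
import yaml

TEMPLATE_VAR = re.compile(r"{{.*?}}")

def parse_frontmatter(frontmatter: str) -> dict:
    safe = TEMPLATE_VAR.sub("LC_TEMPLATE_VAR", frontmatter)
    return yaml.safe_load(safe)

print(parse_frontmatter("title: {{title}}\ncreated: {{date}}"))
# {'title': 'LC_TEMPLATE_VAR', 'created': 'LC_TEMPLATE_VAR'}
```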
10 months ago
Nithish Raghunandanan eecfa3f9e5
Add Couchbase document loader (#13979)
**Description:** 
Adds the document loader for [Couchbase](http://couchbase.com/), a
distributed NoSQL database.
**Dependencies:** 
Added the Couchbase SDK as an optional dependency.
**Twitter handle:** nithishr

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
Muntaqa Mahmood 25f72944a0
Add: Steam API tool (#14008)
- **Description:** Our PR is an integration of a Steam API Tool that
makes recommendations on steam games based on user's Steam profile and
provides information on games based on user provided queries.
- **Issue:** the issue # our PR implements:
https://github.com/langchain-ai/langchain/issues/12120
- **Dependencies:** python-steam-api library, steamspypi library and
decouple library
  - **Tag maintainer:** @baskaryan, @hwchase17 
  - **Twitter handle:** N/A

Hello langchain Maintainers,

We are a team of 4 University of Toronto students contributing to
langchain as part of our course [CSCD01 (link to course
page)](https://cscd01.com/work/open-source-project). We hope our changes
help the community. We have run make format, make lint and make test
locally before submitting the PR. To our knowledge, our changes do not
introduce any new errors.

Our PR integrates the python-steam-api, steamspypi and decouple
packages. We have added integration tests to test our python API
integration into langchain and an example notebook is also provided.

Our amazing team that contributed to this PR: @JohnY2002, @shenceyang,
@andrewqian2001 and @muntaqamahmood

Thank you in advance to all the maintainers for reviewing our PR!

---------

Co-authored-by: Shence <ysc1412799032@163.com>
Co-authored-by: JohnY2002 <johnyuan0526@gmail.com>
Co-authored-by: Andrew Qian <andrewqian2001@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: JohnY <94477598+JohnY2002@users.noreply.github.com>
10 months ago
Bob Lin cd2028288e
Add openai v2 adapter (#14063)
### Description

Starting from [openai version
1.0.0](17ac677995 (module-level-client)),
the camel case form of `openai.ChatCompletion` is no longer supported
and has been changed to lowercase `openai.chat.completions`. In
addition, the returned object only accepts attribute access instead of
index access:

```python
import openai

# optional; defaults to `os.environ['OPENAI_API_KEY']`
openai.api_key = '...'

# all client options can be configured just like the `OpenAI` instantiation counterpart
openai.base_url = "https://..."
openai.default_headers = {"x-foo": "true"}

completion = openai.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": "How do I output all files in a directory using Python?",
        },
    ],
)
print(completion.choices[0].message.content)
```

So I implemented a compatible adapter that supports both attribute
access and index access:

```python
In [1]: from langchain.adapters import openai as lc_openai
   ...: messages = [{"role": "user", "content": "hi"}]

In [2]: result = lc_openai.chat.completions.create(
   ...:     messages=messages, model="gpt-3.5-turbo", temperature=0
   ...: )

In [3]: result.choices[0].message
Out[3]: {'role': 'assistant', 'content': 'Hello! How can I assist you today?'}

In [4]: result["choices"][0]["message"]
Out[4]: {'role': 'assistant', 'content': 'Hello! How can I assist you today?'}

In [5]: result = await lc_openai.chat.completions.acreate(
   ...:     messages=messages, model="gpt-3.5-turbo", temperature=0
   ...: )

In [6]: result.choices[0].message
Out[6]: {'role': 'assistant', 'content': 'Hello! How can I assist you today?'}

In [7]: result["choices"][0]["message"]
Out[7]: {'role': 'assistant', 'content': 'Hello! How can I assist you today?'}

In [8]: for rs in lc_openai.chat.completions.create(
    ...:     messages=messages, model="gpt-3.5-turbo", temperature=0, stream=True
    ...: ):
    ...:     print(rs.choices[0].delta)
    ...:     print(rs["choices"][0]["delta"])
    ...:
{'role': 'assistant', 'content': ''}
{'role': 'assistant', 'content': ''}
{'content': 'Hello'}
{'content': 'Hello'}
{'content': '!'}
{'content': '!'}

In [20]: async for rs in await lc_openai.chat.completions.acreate(
    ...:     messages=messages, model="gpt-3.5-turbo", temperature=0, stream=True
    ...: ):
    ...:     print(rs.choices[0].delta)
    ...:     print(rs["choices"][0]["delta"])
    ...:
{'role': 'assistant', 'content': ''}
{'role': 'assistant', 'content': ''}
{'content': 'Hello'}
{'content': 'Hello'}
{'content': '!'}
{'content': '!'}
...
```

### Twitter handle

[lin_bob57617](https://twitter.com/lin_bob57617)
10 months ago
billytrend-cohere 0f02081392
Add input_type override (#14068)
Add option to override input_type for cohere's v3 embeddings models

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
Dmitrii Rashchenko aaabc1574f
Support of custom hugging face inference endpoints url (#14125)
- **Description:** to support not only publicly available Hugging Face
endpoints, but also protected ones (created with Hugging Face's "Inference
Endpoints" feature), I have added the ability to specify a custom api_url.
If not specified, the default behaviour doesn't change.
  - **Issue:** #9181,
  - **Dependencies:** no extra dependencies
10 months ago
Harrison Chase e32185193e
Harrison/embass (#14242)
Co-authored-by: Julius Lipp <lipp.julius@gmail.com>
10 months ago
umair mehmood 8504ec56e4
fixed: ModuleNotFoundError: No module named 'clarifai.auth' (#14215)
Updated the clarifai imports 

fixed: #14175 

@efriis 
@baskaryan
10 months ago
Hieu Lam ca8a022cd9
Fixed OpenAIFunctionsAgent not returning when receiving AgentFinish (#14236)
**Description:** The condition check in the
`return_stopped_response` function of `OpenAIAgent` may not be correct:
when a tool returns an `AgentFinish`, it is not handled properly.


Thanks for review, @baskaryan, @eyurtsev, @hwchase17.
10 months ago
Unai Garay Maestre 6826feea14
Adds `llm_chain_kwargs` to `BaseRetrievalQA.from_llm` (#14224)
- **Description:** Adds `llm_chain_kwargs` to `BaseRetrievalQA.from_llm`
so these can be passed to the LLM at runtime,
- **Issue:** https://github.com/langchain-ai/langchain/issues/14216,

---------

Signed-off-by: ugm2 <unaigaraymaestre@gmail.com>
10 months ago
James Braza 6ce5dab38c
Clarifying descriptions in `GuardrailsOutputParser` (#14228)
Upstreaming knowledge from
https://github.com/guardrails-ai/guardrails/discussions/473 to LangChain
10 months ago
geret1 50aee687c6
langchain[patch]: Cerebrium model_api_request deprecation (#12704)
- **Description:** As part of my conversation with the Cerebrium team,
`model_api_request` will no longer be available in the cerebrium lib, so it
needs to be replaced.
  - **Issue:** #12705,
  - **Dependencies:** Cerebrium team (agreed)
  - **Tag maintainer:** @eyurtsev 
  - **Twitter handle:** No official Twitter account sorry :D

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
William FH 246dc4f9cc
langchain[patch]: Pass kwargs to chat fireworks (#14183)
Otherwise `.bind()` isn't really any good
10 months ago
Kaiboon Ee e961c57fd2
langchain[patch]: Mask API key for Arcee LLM (#14193)
- **Description:** Mask API key for Arcee LLM and its associated unit
tests
  - **Issue:** https://github.com/langchain-ai/langchain/issues/12165
  - **Dependencies:** N/A
  - **Tag maintainer:** @eyurtsev
  - **Twitter handle:** `eekaiboon`

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
Daniyar Supiyev 092f302c0f
langchain[patch]: Asynchronous human-in-the-loop callback (#14195)
**Description:** Adds the possibility of using an asynchronous callback
handler in the human-in-the-loop validation tool. Very useful, for example,
if you want to implement validation via a Telegram bot.
**Issue:** -
**Dependencies:** -

---------

Co-authored-by: Daniyar_Supiyev <daniyar_supiyev@epam.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
Mark Cusack 16c83f786c
Adds the Yellowbrick Data Warehouse as a supported vector store (#13820)
- **Description** An integration to allow the Yellowbrick Data Warehouse
to function as a vector store

---------

Co-authored-by: markcusack <markcusack@markcusacksmac.lan>
Co-authored-by: markcusack <markcusack@Mark-Cusack-sMac.local>
10 months ago
Hendrik Hogertz e6862e6e7d
Fix Azure Openai function calling in streaming mode (#13768)
- **Description**: This PR addresses an issue with the OpenAI API
streaming response, where initially the key (arguments) is provided but
the value is None. Subsequently, it updates with {"arguments": "{\n"},
leading to a type inconsistency that causes an exception. The specific
error encountered is ValueError: additional_kwargs["arguments"] already
exists in this message, but with a different type. This change aims to
resolve this inconsistency and ensure smooth API interactions (an
illustrative sketch of the merge issue follows this list).
- **Issue**: None.
- **Dependencies**: None.
- **Tag maintainer**: @eyurtsev
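An illustrative sketch of the underlying merge issue (not the PR's actual code): when concatenating streamed `additional_kwargs` chunks, an initial `None` value must be coerced to an empty string to avoid the type-mismatch error.

```python
def merge_function_call_chunks(chunks: list) -> dict:
    merged = {}
    for chunk in chunks:
        for key, value in chunk.items():
            # Treat None as "" so string concatenation never mixes types.
            merged[key] = merged.get(key, "") + (value or "")
    return merged

chunks = [{"arguments": None}, {"arguments": "{\n"}, {"arguments": '  "q": "test"\n}'}]
print(merge_function_call_chunks(chunks))  # {'arguments': '{\n  "q": "test"\n}'}
```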

This is an updated version of #13229 based on the refactored code.
Credit goes to @superken01.

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
10 months ago
Nicolò Boschi e204657b3c
AstraDB VectorStore: implement pre_delete_collection (#13780)
- **Description:** some vector stores have a flag to try deleting the
collection before creating it (such as `pgvector`). This is a useful
flag when prototyping indexing pipelines and also for integration tests.
Added the bool flag `pre_delete_collection` to the constructor (default
False); a hedged usage sketch follows this list.
  - **Tag maintainer:** @hemidactylus 
  - **Twitter handle:** nicoloboschi
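A hedged usage sketch; apart from `pre_delete_collection`, the constructor parameters below are assumptions about the AstraDB vector store, and the credentials are placeholders.

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import AstraDB

store = AstraDB(
    embedding=OpenAIEmbeddings(),
    collection_name="test_collection",
    api_endpoint="https://<db-id>-<region>.apps.astra.datastax.com",  # placeholder
    token="AstraCS:...",                                              # placeholder
    pre_delete_collection=True,  # drop any existing collection before creating it
)
```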

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
10 months ago
Chelsea E. Manning 2780d2d4dd
Extend OpenAIEmbeddings class to support non-`tiktoken` based embeddings (#13884)
- **Description:** This extends `OpenAIEmbeddings` to add support for
non-`tiktoken` based embeddings, specifically for use with the new
`text-generation-webui` API (`--extensions openai`) which does not
support `tiktoken` encodings, but rather strings
  - **Issue:** Not found,
- **Dependencies:** HuggingFace `transformers.AutoTokenizer` is new
dependency for running the model without `tiktoken`
- **Tag maintainer:** @baskaryan based on last commit for
`langchain-core` refactor
  - **Twitter handle:** @xychelsea

Modified the tokenization process to be model-agnostic, allowing for
both OpenAI and non-OpenAI model tokenizations, by setting the new
default `bool` flag `tiktoken_enabled` to `False`. This requires
HuggingFace’s AutoTokenizer and handling tokenization for models
requiring different preprocessing steps to generate a chunked string
request rather than a list of integers.

Updated the embeddings generation process to accommodate non-OpenAI
models. This includes converting tokenized text into embeddings using
OpenAI’s and Hugging Face’s model architectures.
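A hedged usage sketch showing the new flag; the base URL and model name are placeholders for a `text-generation-webui`-style OpenAI-compatible API.

```python
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(
    openai_api_base="http://localhost:5000/v1",  # placeholder endpoint
    openai_api_key="not-needed",
    model="sentence-transformers/all-mpnet-base-v2",  # placeholder model
    tiktoken_enabled=False,  # fall back to a HuggingFace AutoTokenizer
)
print(len(embeddings.embed_query("hello world")))
```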
10 months ago
Changgeng Zhao 9b59bde93d
Update Hologres vector store: use hologres-vector (#13767)
Hi,
I made some code changes on the Hologres vector store to improve the
data insertion performance.
Also, this version of the code uses `hologres-vector` library. This
library is more convenient for us to update, and more efficient in
performance.
The code has passed the format/lint/spell check. I have run the unit
test for Hologres connecting to my own database.
Please check this PR again and tell me if anything needs to change.

Best,
Changgeng,
Developer @ Alibaba Cloud

Co-authored-by: Changgeng Zhao <zhaochanggeng.zcg@alibaba-inc.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
10 months ago
Nicolò Boschi 0de7cf898d
Ensure AstraDB integration tests clean up the environment (#13774)
- **Description:** currently astra_db integration tests might leave
orphan collections
  - **Tag maintainer:** @hemidactylus 
  - **Twitter handle:** nicoloboschi
10 months ago
Chad Norvell 8a0951d934
Fix Mathpix PDF loader integration (#13949)
- **Description:** Fixes the Mathpix PDF loader API integration.
Specifically, ensures that Mathpix auth headers are provided for every
request, and ensures that we recognize all errors that can occur during
a request. Also, the option to provide API keys as kwargs never actually
worked before, but now that's fixed too.
  - **Issue:** #11249
  - **Dependencies:** None
10 months ago
gzyJoy 32d4bb4590
Added Slacktoolkit (#14012)
- **Description:** 
This PR introduces the Slack toolkit to LangChain, which allows users to
read and write to Slack using the Slack API. Specifically, we've added
the following tools.
1. get_channel: Provides a summary of all the channels in a workspace.
2. get_message: Gets the message history of a channel.
3. send_message: Sends a message to a channel.
4. schedule_message: Sends a message to a channel at a specific time and
date.

- **Issue:** This pull request addresses [Add Slack Toolkit
#11747](https://github.com/langchain-ai/langchain/issues/11747)
  - **Dependencies:** package`slack_sdk`
Note: For this toolkit to function you will need to add a Slack app to
your workspace. Additional info can be found
[here](https://slack.com/help/articles/202035138-Add-apps-to-your-Slack-workspace).

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: ArianneLavada <ariannelavada@gmail.com>
Co-authored-by: ArianneLavada <84357335+ArianneLavada@users.noreply.github.com>
Co-authored-by: ariannelavada@gmail.com <you@example.com>
10 months ago
Richie 99e5ee6a84
fix(vectorstores): incorrect import for mongodb atlas DriverInfo (#14060)
- **Description:** fix `import` issue for `mongodb atlas` vectore store
integration
  - **Issue:** none
  - **Dependencies:** none

While trying to follow the official `langchain` [mongodb integration
guide](https://python.langchain.com/docs/integrations/vectorstores/mongodb_atlas),
an import error occurs.

It's caused by incorrect import location:
- `from pymongo import DriverInfo` should be `from pymongo.driver_info
import DriverInfo`
- reference: [pymongo's DriverInfo
class](https://pymongo.readthedocs.io/en/stable/api/pymongo/driver_info.html#pymongo.driver_info.DriverInfo)

Thanks!
10 months ago
James Braza 3833882ab7
Removing extra `StdOutCallbackHandler` overridden methods (#14136)
Unnecessarily overridden methods:

- Give the idea the subclass is doing something special (when it isn't)
- Block CTRL-click to the actual method

This PR removes some unnecessarily overridden methods in
`StdOutCallbackHandler`

Supercedes https://github.com/langchain-ai/langchain/pull/12858
10 months ago
James Braza 052e23be3e
Added Python `logging` tracer (#14190)
This PR creates a logging handler and adds a simple unit test of it

Supercedes https://github.com/langchain-ai/langchain/pull/12862

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
10 months ago
Bob Lin 62505043be
Closed #14069 (#14166)
### Description

Fix #14069

### Twitter handle

[lin_bob57617](https://twitter.com/lin_bob57617)
10 months ago
Yong woo Song 9938086df0
Fix Html2TextTransformer for shallow copy (#14197)
Hi,
There is some unintended behavior in Html2TextTransformer.
The current code is **directly modifying the original documents that are
passed as arguments to the function.**
Therefore, not only the return of the function but also the input
variables are being modified simultaneously.
**To resolve this, I added unit test code as well.**

reference link: [Shallow vs Deep Copying of Python
Objects](https://realpython.com/copying-python-objects/)

Thanks! ☺️
10 months ago
h3l 818252b1f8
Fix: (issue #14127) Volc Engine MaaS import error (#14194)
- **Description:** fix Volc Engine MaaS import error
- **Issue:** https://github.com/langchain-ai/langchain/issues/14127
  - **Dependencies:** None
  - **Tag maintainer:** @baskaryan 
  - **Twitter handle:**

Co-authored-by: lvzhong <lvzhong@bytedance.com>
10 months ago
Bagatur 0bdb434383
langchain[patch]: Release langchain 0.0.345 (#14184) 10 months ago
Bagatur 15c04a5670
core[patch]: Release 0.0.9 (#14182) 10 months ago
James Braza bdb6ae2ed3
core[patch]: `BaseTracer` helper method for `Run` lookup (#14139)
I observed the same run ID extraction logic is repeated many times in
`BaseTracer`.

This PR creates a helper method for DRY code.
10 months ago
Harutaka Kawamura 41ee3be95f
langchain[patch]: Support passing parameters to `llms.Databricks` and `llms.Mlflow` (#14100)
Before, we needed to use `params` to pass extra parameters:

```python
from langchain.llms import Databricks

Databricks(..., params={"temperature": 0.0})
```

Now, we can directly specify extra params:

```python
from langchain.llms import Databricks

Databricks(..., temperature=0.0)
```
10 months ago
Abdul 82102c99b3
langchain[patch]: Running SQLDatabaseChain adds prefix "SQLQuery:\n" (#14058)
- **Issue:** https://github.com/langchain-ai/langchain/issues/12077

---------

Co-authored-by: Abdul Kader Maliyakkal <maliyakk@amazon.com>
10 months ago
Samuel Kemp fd781c89cc
langchain[minor]: add azure ai data document loader (#13404)
This PR adds an "Azure AI data" document loader, which allows Azure AI
users to load their registered data assets as a document object in
langchain.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
James Braza 24385a00de
core[minor], langchain[patch], experimental[patch]: Added missing `py.typed` to `langchain_core` (#14143)
See PR title.

From what I can see, `poetry` will auto-include this. Please let me know
if I am missing something here.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
quantum00549 f7c257553d
langchain[patch]: fixed a bug that was causing the streaming transfer to not work… (#10827)
… properly

Fixed a bug that was causing the streaming transfer to not work
properly.
- **Description:**
  1. The `on_llm_new_token` method in the streaming callback can now be
called properly in streaming transfer mode.
  2. In streaming transfer mode, the LLM can now correctly output the complete
response instead of just the first token.
- **Tag maintainer:** @wangxuqi
- **Twitter handle:** @kGX7XJjuYxzX9Km

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
Eugene Yurtsev 6d0209e0aa
Improve file system blob loader and generic loader (#14004)
* Add support for passing a specific file to the file system blob loader
* Allow specifying a class parameter for the parser for the generic
loader

```python

class AudioLoader(GenericLoader):
    @staticmethod
    def get_parser(**kwargs):
        # MyAudioParser is an illustrative custom parser class.
        return MyAudioParser(**kwargs)
```

The intent of the GenericLoader is to provide on-ramps from different
sources (e.g., web, s3, file system).

An alternative is to use pipelining syntax or creating a Pipeline

```
FileSystemBlobLoader(...) | MyAudioParser
```

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
Lance Martin cbe4753e1a
Update Open CLIP embd (#14155)
The prior default model required a large amount of RAM and often crashed the
Jupyter notebook kernel.
10 months ago
Amyh102 b6d26d3f9f
infra[patch]: Add unit tests for Huggingface dataset loader (#14053)
- **Description:** Add unit tests for huggingface dataset loader and
sample huggingface dataset for future tests. Updates dependencies for
`datasets` module.
- Adds coverage for [previous pull
request](https://github.com/langchain-ai/langchain/pull/13864)
  - **Tag maintainer:** @hwchase17

---------

Co-authored-by: Amy Han <amyhan@Amys-Air.lan>
Co-authored-by: Amy Han <amyhan@Amys-MacBook-Air.local>
Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
Govinda Totla 62a3473ac0
docs[patch]: add text_splitter.py test (#14025)
Description: Add HTMLHeaderTextSplitter unit test
Dependencies: none
10 months ago
axiangcoding 1b36ddf16c
docs[patch]: add deprecated note for ErnieChatBot (#14061)
- **Description:** just a small change to the ErnieChatBot class
description, suggesting users use a more suitable class
  - **Issue:** none,
  - **Dependencies:** none,
  - **Tag maintainer:** @baskaryan ,
  - **Twitter handle:** none
10 months ago
Devin Dahoon Kim 32da0a4d71
langchain[patch]: use async_embed_with_retry in _aget_len_safe_embeddings (#14110)
**Description**

`embed_with_retry` is for sync operations and not for async operations.
Use `async_embed_with_retry` for appropriate async operations.


I'm using `OpenAIEmbedding(http_client=httpx.AsyncClient())` with only
async operations.
However, I got an error when I use `embedding.aembed_documents` because
`embed_with_retry` uses sync OpenAI client with async http client.
10 months ago
lijie 371bcb7580
langchain[patch]: set maxsplit when parse python function docstring (#14121)
Description

When the description of an arg in a Python docstring contains ":", the
`_parse_python_function_docstring` will raise **ValueError: too many
values to unpack (expected 2)**.

A sample desc would be:
"""
Args: 
    error_arg: this is an arg with an additional ":" symbol
"""

So, set `maxsplit` parameter to fix it.
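The gist of the fix on one line (illustrative, not the exact patch): split on the first ":" only.

```python
line = 'error_arg: this is an arg with an additional ":" symbol'
arg_name, arg_desc = line.split(":", maxsplit=1)
print(arg_name)          # error_arg
print(arg_desc.strip())  # this is an arg with an additional ":" symbol
```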
10 months ago
Harrison Chase ae646701c4
Harrison/ibm (#14133)
Co-authored-by: Mateusz Szewczyk <139469471+MateuszOssGit@users.noreply.github.com>
10 months ago
Eugene Yurtsev 943aa01c14
Improve indexing performance for Postgres (remote database) for refresh for async API (#14132)
This PR speeds up the indexing api on the async path by batching the uid
updates in the sql record manager (which may be remote).
10 months ago
William FH 528fc76d6a
Update Prompt Format Error (#14044)
The number of times I try to format a string (especially in lcel) is
embarrassingly high. Think this may be more actionable than the default
error message. Now I get nice helpful errors


```
KeyError: "Input to ChatPromptTemplate is missing variable 'input'.  Expected: ['input'] Received: ['dialogue']"
```
10 months ago
William FH 71c2e184b4
[Nits] Evaluation - Some Rendering Improvements (#14097)
- Improve rendering of aggregate results at the end
- flatten reference if present
10 months ago
Mark Scannell 9b0e46dcf0
Improve indexing performance for Postgres (remote database) for refresh (#14126)
**Description:** By combining the document timestamp refresh within a
single call to update(), this enables batching of multiple documents in
a single SQL statement. This is important for non-local databases where
tens of milliseconds has a huge impact on performance when doing
document-by-document SQL statements.
**Issue:** #11935 
**Dependencies:** None
**Tag maintainer:** @eyurtsev
10 months ago
Jacob Lee 3328507f11
langchain[patch], experimental[minor]: Adds OllamaFunctions wrapper (#13330)
CC @baskaryan @hwchase17 @jmorganca 

Having a bit of trouble importing `langchain_experimental` from a
notebook, will figure it out tomorrow

~Ah and also is blocked by #13226~

---------

Co-authored-by: Lance Martin <lance@langchain.dev>
Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
Bagatur 4063bf144a
langchain[patch]: release 0.0.344 (#14095) 10 months ago
Bagatur efce352d6b
core[patch]: release 0.0.8 (#14086) 10 months ago
Harutaka Kawamura 0d08a692a3
langchain[minor]: Migrate mlflow and databricks classes to deployments APIs. (#13699)
## Description

Related to https://github.com/mlflow/mlflow/pull/10420. MLflow AI
gateway will be deprecated and replaced by the `mlflow.deployments`
module. Happy to split this PR if it's too large.

```
pip install git+https://github.com/langchain-ai/langchain.git@refs/pull/13699/merge#subdirectory=libs/langchain
```

## Dependencies

Install mlflow from https://github.com/mlflow/mlflow/pull/10420:

```
pip install git+https://github.com/mlflow/mlflow.git@refs/pull/10420/merge
```

## Testing plan

The following code works fine on local and databricks:

<details><summary>Click</summary>
<p>

```python
"""
Setup
-----
mlflow deployments start-server --config-path examples/gateway/openai/config.yaml
databricks secrets create-scope <scope>
databricks secrets put-secret <scope> openai-api-key --string-value $OPENAI_API_KEY

Run
---
python /path/to/this/file.py secrets/<scope>/openai-api-key
"""
from langchain.chat_models import ChatMlflow, ChatDatabricks
from langchain.embeddings import MlflowEmbeddings, DatabricksEmbeddings
from langchain.llms import Databricks, Mlflow
from langchain.schema.messages import HumanMessage
from langchain.chains.loading import load_chain
from mlflow.deployments import get_deploy_client
import uuid
import sys
import tempfile
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

###############################
# MLflow
###############################
chat = ChatMlflow(
    target_uri="http://127.0.0.1:5000", endpoint="chat", params={"temperature": 0.1}
)
print(chat([HumanMessage(content="hello")]))

embeddings = MlflowEmbeddings(target_uri="http://127.0.0.1:5000", endpoint="embeddings")
print(embeddings.embed_query("hello")[:3])
print(embeddings.embed_documents(["hello", "world"])[0][:3])

llm = Mlflow(
    target_uri="http://127.0.0.1:5000",
    endpoint="completions",
    params={"temperature": 0.1},
)
print(llm("I am"))

llm_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["adjective"],
        template="Tell me a {adjective} joke",
    ),
)
print(llm_chain.run(adjective="funny"))

# serialization/deserialization
with tempfile.TemporaryDirectory() as tmpdir:
    print(tmpdir)
    path = f"{tmpdir}/llm.yaml"
    llm_chain.save(path)
    loaded_chain = load_chain(path)
    print(loaded_chain("funny"))

###############################
# Databricks
###############################
secret = sys.argv[1]
client = get_deploy_client("databricks")

# External - chat
name = f"chat-{uuid.uuid4()}"
client.create_endpoint(
    name=name,
    config={
        "served_entities": [
            {
                "name": "test",
                "external_model": {
                    "name": "gpt-4",
                    "provider": "openai",
                    "task": "llm/v1/chat",
                    "openai_config": {
                        "openai_api_key": "{{" + secret + "}}",
                    },
                },
            }
        ],
    },
)
try:
    chat = ChatDatabricks(
        target_uri="databricks", endpoint=name, params={"temperature": 0.1}
    )
    print(chat([HumanMessage(content="hello")]))
finally:
    client.delete_endpoint(endpoint=name)

# External - embeddings
name = f"embeddings-{uuid.uuid4()}"
client.create_endpoint(
    name=name,
    config={
        "served_entities": [
            {
                "name": "test",
                "external_model": {
                    "name": "text-embedding-ada-002",
                    "provider": "openai",
                    "task": "llm/v1/embeddings",
                    "openai_config": {
                        "openai_api_key": "{{" + secret + "}}",
                    },
                },
            }
        ],
    },
)
try:
    embeddings = DatabricksEmbeddings(target_uri="databricks", endpoint=name)
    print(embeddings.embed_query("hello")[:3])
    print(embeddings.embed_documents(["hello", "world"])[0][:3])
finally:
    client.delete_endpoint(endpoint=name)

# External - completions
name = f"completions-{uuid.uuid4()}"
client.create_endpoint(
    name=name,
    config={
        "served_entities": [
            {
                "name": "test",
                "external_model": {
                    "name": "gpt-3.5-turbo-instruct",
                    "provider": "openai",
                    "task": "llm/v1/completions",
                    "openai_config": {
                        "openai_api_key": "{{" + secret + "}}",
                    },
                },
            }
        ],
    },
)
try:
    llm = Databricks(
        endpoint_name=name,
        model_kwargs={"temperature": 0.1},
    )
    print(llm("I am"))
finally:
    client.delete_endpoint(endpoint=name)


# Foundation model - chat
chat = ChatDatabricks(
    endpoint="databricks-llama-2-70b-chat", params={"temperature": 0.1}
)
print(chat([HumanMessage(content="hello")]))

# Foundation model - embeddings
embeddings = DatabricksEmbeddings(endpoint="databricks-bge-large-en")
print(embeddings.embed_query("hello")[:3])

# Foundation model - completions
llm = Databricks(
    endpoint_name="databricks-mpt-7b-instruct", model_kwargs={"temperature": 0.1}
)
print(llm("hello"))
llm_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["adjective"],
        template="Tell me a {adjective} joke",
    ),
)
print(llm_chain.run(adjective="funny"))

# serialization/deserialization
with tempfile.TemporaryDirectory() as tmpdir:
    print(tmpdir)
    path = f"{tmpdir}/llm.yaml"
    llm_chain.save(path)
    loaded_chain = load_chain(path)
    print(loaded_chain("funny"))

```

Output:

```
content='Hello! How can I assist you today?'
[-0.025058426, -0.01938856, -0.027781019]
[-0.025058426, -0.01938856, -0.027781019]
sorry, but I cannot continue the sentence as it is incomplete. Can you please provide more information or context?
Sure, here's a classic one for you:

Why don't scientists trust atoms?

Because they make up everything!
/var/folders/dz/cd_nvlf14g9g__n3ph0d_0pm0000gp/T/tmpx_4no6ad
{'adjective': 'funny', 'text': "Sure, here's a classic one for you:\n\nWhy don't scientists trust atoms?\n\nBecause they make up everything!"}
content='Hello! How can I assist you today?'
[-0.025058426, -0.01938856, -0.027781019]
[-0.025058426, -0.01938856, -0.027781019]
 a 23 year old female and I am currently studying for my master's degree
content="\nHello! It's nice to meet you. Is there something I can help you with or would you like to chat for a bit?"
[0.051055908203125, 0.007221221923828125, 0.003879547119140625]
[0.051055908203125, 0.007221221923828125, 0.003879547119140625]

hello back
 Well, I don't really know many jokes, but I do know this funny story...
/var/folders/dz/cd_nvlf14g9g__n3ph0d_0pm0000gp/T/tmp7_ds72ex
{'adjective': 'funny', 'text': " Well, I don't really know many jokes, but I do know this funny story..."}
```

</p>
</details>

The existing workflow doesn't break:

<details><summary>click</summary>
<p>

```python
import uuid

import mlflow
from mlflow.models import ModelSignature
from mlflow.types.schema import ColSpec, Schema


class MyModel(mlflow.pyfunc.PythonModel):
    def predict(self, context, model_input):
        return str(uuid.uuid4())


with mlflow.start_run():
    mlflow.pyfunc.log_model(
        "model",
        python_model=MyModel(),
        pip_requirements=["mlflow==2.8.1", "cloudpickle<3"],
        signature=ModelSignature(
            inputs=Schema(
                [
                    ColSpec("string", "prompt"),
                    ColSpec("string", "stop"),
                ]
            ),
            outputs=Schema(
                [
                    ColSpec(name=None, type="string"),
                ]
            ),
        ),
        registered_model_name=f"lang-{uuid.uuid4()}",
    )

# Manually create a serving endpoint with the registered model and run
from langchain.llms import Databricks

llm = Databricks(endpoint_name="<name>")
llm("hello")  # 9d0b2491-3d13-487c-bc02-1287f06ecae7
```

</p>
</details> 

## Follow-up tasks

(This PR is too large. I'll file a separate one for follow-up tasks.)

- Update `docs/docs/integrations/providers/mlflow_ai_gateway.mdx` and
`docs/docs/integrations/providers/databricks.md`.

---------

Signed-off-by: harupy <17039389+harupy@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
Jeremy Naccache a14cf87576
core[patch]: Add **kwargs to Langchain's dumps() to allow passing of json.dumps() … (#10628)
…parameters.

In Langchain's `dumps()` function, I've added a `**kwargs` parameter.
This allows users to pass additional parameters to the underlying
`json.dumps()` function, providing greater flexibility and control over
JSON serialization.

Many parameters available in `json.dumps()` can be useful or even
necessary in specific situations. For example, when using an Agent with
return_intermediate_steps set to true, the output is a list of
AgentAction objects. These objects can't be serialized without using
Langchain's `dumps()` function.

The issue arises when using the Agent with a language other than
English, which may contain non-ASCII characters like 'é'. The default
behavior of `json.dumps()` sets ensure_ascii to true, converting
`{"name": "José"}` into `{"name": "Jos\u00e9"}`. This can make the
output hard to read, especially in the case of intermediate steps in
agent logs.

By allowing users to pass additional parameters to `json.dumps()` via
Langchain's dumps(), we can solve this problem. For instance, users can
set `ensure_ascii=False` to maintain the original characters.

This update also enables users to pass other useful `json.dumps()`
parameters like `sort_keys`, providing even more flexibility.

The implementation takes into account edge cases where a user might pass
a "default" parameter, which is already defined by `dumps()`, or an
"indent" parameter, which is also predefined if `pretty=True` is set.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
10 months ago
Yong woo Song f4d520ccb5
Fix .env file path in integration_test README.md (#14028)

### Description
Hello, 

The [integration_test
README](https://github.com/langchain-ai/langchain/tree/master/libs/langchain/tests)
was indicating incorrect paths for the `.env.example` and `.env` files.

`tests/.env.example` -> `tests/integration_tests/.env.example`

While it’s a minor error, it could **potentially lead to confusion** for
the document’s readers, so I’ve made the necessary corrections.

Thank you! ☺️

### Related Issue
- https://github.com/langchain-ai/langchain/pull/2806
10 months ago
Rohan Dey 41a4c06a94
Added support for a Pandas DataFrame OutputParser (#13257)
**Description:**

Added support for a Pandas DataFrame OutputParser with format
instructions, along with unit tests and a demo notebook. Namely, we've
added the ability to request data from a DataFrame, have the LLM parse
the request, and then use that request to retrieve a well-formatted
response.

Within LangChain, it seamlessly integrates with language models like
OpenAI's `text-davinci-003`, facilitating streamlined interaction using
the format instructions (just like the other output parsers).

This parser structures its requests as
`<operation/column/row>[<optional_array_params>]`. The instructions
detail permissible operations, valid columns, and array formats,
ensuring clarity and adherence to the required format.

For example:

- When the LLM receives the input: "Retrieve the mean of `num_legs` from
rows 1 to 3."
- The provided format instructions guide the LLM to structure the
request as: "mean:num_legs[1..3]".

The parser processes this formatted request, leveraging the LLM's
understanding to extract the mean of `num_legs` from rows 1 to 3 within
the Pandas DataFrame.

This integration allows users to communicate requests naturally, with
the LLM transforming these instructions into structured commands
understood by the `PandasDataFrameOutputParser`. The format instructions
act as a bridge between natural language queries and precise DataFrame
operations, optimizing communication and data retrieval.
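
A minimal sketch of the parsing step (assuming the parser is exported from `langchain.output_parsers` and takes a `dataframe` argument):

```python
import pandas as pd

from langchain.output_parsers import PandasDataFrameOutputParser

df = pd.DataFrame({"num_legs": [2, 4, 8, 0], "num_wings": [2, 0, 0, 0]})
parser = PandasDataFrameOutputParser(dataframe=df)

# These instructions are what the LLM sees, describing valid operations and columns.
print(parser.get_format_instructions())

# Parsing the LLM's formatted reply retrieves the value from the DataFrame.
print(parser.parse("mean:num_legs[1..3]"))  # e.g. {'mean': 4.0}
```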

**Issue:**

- https://github.com/langchain-ai/langchain/issues/11532

**Dependencies:**

No additional dependencies :)

**Tag maintainer:**

@baskaryan 

**Twitter handle:**

No need. :)

---------

Co-authored-by: Wasee Alam <waseealam@protonmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
10 months ago
Masanori Taniguchi 235bdb9fa7
Support Vald secure connection (#13269)
**Description:** 
Previously, only insecure gRPC connections to Vald were supported; secure
connections are now supported as well.
In addition, gRPC metadata can be added to Vald requests to enable
authentication with a token.

10 months ago
sudranga d1d693b2a7
Fix issue where response_if_no_docs_found is not implemented on async… (#13297)
`response_if_no_docs_found` was not implemented in
ConversationalRetrievalChain for async code paths. This implements it and
adds test cases.

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
10 months ago
AthulVincent 67c55cb5b0
Implemented MongoDB Atlas Self-Query Retriever (#13321)
# Description 
This PR implements Self-Query Retriever for MongoDB Atlas vector store.

I've implemented the comparators and operators that are supported by
MongoDB Atlas vector store according to the section titled "Atlas Vector
Search Pre-Filter" from
https://www.mongodb.com/docs/atlas/atlas-vector-search/vector-search-stage/.

Namely:
```
allowed_comparators = [
      Comparator.EQ,
      Comparator.NE,
      Comparator.GT,
      Comparator.GTE,
      Comparator.LT,
      Comparator.LTE,
      Comparator.IN,
      Comparator.NIN,
  ]

"""Subset of allowed logical operators."""
allowed_operators = [
    Operator.AND,
    Operator.OR
]
```
Translations from comparators/operators to MongoDB Atlas filter
operators (you can find the syntax in the "Atlas Vector Search
Pre-Filter" section from the previous link) are done using the following
dictionary:
```
map_dict = {
            Operator.AND: "$and",
            Operator.OR: "$or",
            Comparator.EQ: "$eq",
            Comparator.NE: "$ne",
            Comparator.GTE: "$gte",
            Comparator.LTE: "$lte",
            Comparator.LT: "$lt",
            Comparator.GT: "$gt",
            Comparator.IN: "$in",
            Comparator.NIN: "$nin",
        }
```

In visit_structured_query() the filters are passed as "pre_filter" and
not "filter" as in the MongoDB link above since langchain's
implementation of MongoDB atlas vector
store (libs\langchain\langchain\vectorstores\mongodb_atlas.py) in
_similarity_search_with_score() sets the "filter" key to have the value
of the "pre_filter" argument.
```
params["filter"] = pre_filter
```
Test cases and documentation have also been added.
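
As a rough sketch of the translation step (the translator class name and import path below are assumptions modeled on the other self-query translators):

```python
from langchain.chains.query_constructor.ir import (
    Comparator,
    Comparison,
    Operation,
    Operator,
)
# Assumed location of the translator added in this PR.
from langchain.retrievers.self_query.mongodb_atlas import MongoDBAtlasTranslator

query = Operation(
    operator=Operator.AND,
    arguments=[
        Comparison(comparator=Comparator.EQ, attribute="genre", value="comedy"),
        Comparison(comparator=Comparator.GTE, attribute="year", value=2000),
    ],
)

# Should produce a MongoDB-style pre_filter, roughly:
# {'$and': [{'genre': {'$eq': 'comedy'}}, {'year': {'$gte': 2000}}]}
print(query.accept(MongoDBAtlasTranslator()))
```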

# Issue
#11616 

# Dependencies
No new dependencies have been added.

# Documentation
I have created the notebook mongodb_atlas_self_query.ipynb outlining the
steps to get the self-query mechanism working.

I worked closely with [@Farhan-Faisal](https://github.com/Farhan-Faisal)
on this PR.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
Josef Zoller c2e3963da4
Merriam-Webster Dictionary Tool (#12044)
# Description

We implemented a simple tool for accessing the Merriam-Webster
Collegiate Dictionary API
(https://dictionaryapi.com/products/api-collegiate-dictionary).

Here's a simple usage example:

```py
from langchain.llms import OpenAI
from langchain.agents import load_tools, initialize_agent, AgentType

llm = OpenAI()
tools = load_tools(["serpapi", "merriam-webster"], llm=llm) # Serp API gives our agent access to Google
agent = initialize_agent(
  tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
agent.run("What is the english word for the german word Himbeere? Define that word.")
```

Sample output:

```
> Entering new AgentExecutor chain...
 I need to find the english word for Himbeere and then get the definition of that word.
Action: Search
Action Input: "English word for Himbeere"
Observation: {'type': 'translation_result'}
Thought: Now I have the english word, I can look up the definition.
Action: MerriamWebster
Action Input: raspberry
Observation: Definitions of 'raspberry':

1. rasp-ber-ry, noun: any of various usually black or red edible berries that are aggregate fruits consisting of numerous small drupes on a fleshy receptacle and that are usually rounder and smaller than the closely related blackberries
2. rasp-ber-ry, noun: a perennial plant (genus Rubus) of the rose family that bears raspberries
3. rasp-ber-ry, noun: a sound of contempt made by protruding the tongue between the lips and expelling air forcibly to produce a vibration; broadly : an expression of disapproval or contempt
4. black raspberry, noun: a raspberry (Rubus occidentalis) of eastern North America that has a purplish-black fruit and is the source of several cultivated varieties —called also blackcap

Thought: I now know the final answer.
Final Answer: Raspberry is an english word for Himbeere and it is defined as any of various usually black or red edible berries that are aggregate fruits consisting of numerous small drupes on a fleshy receptacle and that are usually rounder and smaller than the closely related blackberries.

> Finished chain.
```

# Issue

This closes #12039.

# Dependencies

We added no extra dependencies.

---------

Co-authored-by: Lara <63805048+larkgz@users.noreply.github.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
10 months ago
Mohammad Mohtashim f3dd4a10cf
DROP BOX Loader Documentation Update (#14047)
- **Description:** Update the documentation for the Dropbox loader and make the
messages more verbose when loading PDF files, since people were getting
confused
  - **Issue:** #13952
  - **Tag maintainer:** @baskaryan, @eyurtsev, @hwchase17,

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
10 months ago
Cheng (William) Huang a00db4b28f
Add multi-input Reddit search tool (#13893)
- **Description:** Added a tool called RedditSearchRun and an
accompanying API wrapper, which searches Reddit for posts with support
for time filtering, post sorting, query string and subreddit filtering.
  - **Issue:** #13891 
  - **Dependencies:** `praw` module is used to search Reddit
- **Tag maintainer:** @baskaryan , and any of the other maintainers if
needed
  - **Twitter handle:** None.

  Hello,

This is our first PR and we hope that our changes will be helpful to the
community. We have run `make format`, `make lint` and `make test`
locally before submitting the PR. To our knowledge, our changes do not
introduce any new errors.

Our PR integrates the `praw` package which is already used by
RedditPostsLoader in LangChain. Nonetheless, we have added integration
tests and edited unit tests to test our changes. An example notebook is
also provided. These changes were put together by me, @Anika2000,
@CharlesXu123, and @Jeremy-Cheng-stack

Thank you in advance to the maintainers for their time.
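
A minimal usage sketch (import paths, parameter names, and the shape of the tool input are assumptions; real Reddit API credentials via `praw` are required):

```python
from langchain.tools.reddit_search.tool import RedditSearchRun
from langchain.utilities.reddit_search import RedditSearchAPIWrapper

search = RedditSearchRun(
    api_wrapper=RedditSearchAPIWrapper(
        reddit_client_id="...",          # placeholder credentials
        reddit_client_secret="...",
        reddit_user_agent="langchain-example",
    )
)

# Multi-input query: query string, sorting, time filtering, and subreddit.
print(
    search.run(
        tool_input={
            "query": "langchain",
            "sort": "new",
            "time_filter": "week",
            "subreddit": "all",
            "limit": "5",
        }
    )
)
```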

---------

Co-authored-by: What-Is-A-Username <49571870+What-Is-A-Username@users.noreply.github.com>
Co-authored-by: Anika2000 <anika.sultana@mail.utoronto.ca>
Co-authored-by: Jeremy Cheng <81793294+Jeremy-Cheng-stack@users.noreply.github.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
10 months ago
Jawad Arshad 00a6e8962c
langchain[minor]: Add serpapi tools (#13934)
- **Description:** Added several endpoints supported by serpapi
that are not supported in langchain at the moment, such as Google Trends,
Google Finance, Google Jobs, and Google Lens
- **Issue:** [Add support for many of the querying endpoints with
serpapi #11811](https://github.com/langchain-ai/langchain/issues/11811)

---------

Co-authored-by: zushenglu <58179949+zushenglu@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Ian Xu <ian.xu@mail.utoronto.ca>
Co-authored-by: zushenglu <zushenglu1809@gmail.com>
Co-authored-by: KevinT928 <96837880+KevinT928@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
h3l dbaeb163aa
langchain[minor]: add volcengine endpoint as LLM (#13942)
- **Description:** Volc Engine MaaS serves as an enterprise-grade,
large-model service platform designed for developers. You can visit its
homepage at https://www.volcengine.com/docs/82379/1099455 for details.
This change makes it easier for developers to integrate quickly with the
platform (see the sketch below).
  - **Issue:** None
  - **Dependencies:** volcengine
  - **Tag maintainer:** @baskaryan 
  - **Twitter handle:** @he1v3tica
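
A minimal sketch (the class and credential parameter names are assumptions; valid Volc Engine access/secret keys are required):

```python
from langchain.llms import VolcEngineMaasLLM

llm = VolcEngineMaasLLM(
    volc_engine_maas_ak="...",  # placeholder access key
    volc_engine_maas_sk="...",  # placeholder secret key
)
print(llm("hello"))
```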

---------

Co-authored-by: lvzhong <lvzhong@bytedance.com>
10 months ago
Mohammad Ahmad 1600ebe6c7
langchain[patch]: Mask API key for ForeFrontAI LLM (#14013)
- **Description:** Mask API key for ForeFrontAI LLM and associated unit
tests
  - **Issue:** https://github.com/langchain-ai/langchain/issues/12165
  - **Dependencies:** N/A
  - **Tag maintainer:** @eyurtsev 
  - **Twitter handle:** `__mmahmad__`

I made the API key non-optional since linting required adding validation
for None, but the key is required per documentation:
https://python.langchain.com/docs/integrations/llms/forefrontai
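
A quick sketch of the masking behavior (assuming the field is named `forefrontai_api_key`; the endpoint URL is a placeholder):

```python
from langchain.llms import ForefrontAI

llm = ForefrontAI(
    forefrontai_api_key="my-secret-key",
    endpoint_url="https://example.forefront.link",  # placeholder endpoint
)

# The key is now stored as a SecretStr, so it is masked when printed or logged.
print(llm.forefrontai_api_key)  # **********
```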
10 months ago
yoch a0e859df51
langchain[patch]: fix cohere reranker init #12899 (#14029)
- **Description:** use post field validation for `CohereRerank`
  - **Issue:** #12899 and #13058
  - **Dependencies:** 
  - **Tag maintainer:** @baskaryan

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
123-fake-st 9bd6e9df36
update pdf document loaders' metadata source to url for online pdf (#13274)
- **Description:** Update 5 pdf document loaders in
`langchain.document_loaders.pdf`, to store a url in the metadata
(instead of a temporary, local file path) if the user provides a web
path to a pdf: `PyPDFium2Loader`, `PDFMinerLoader`,
`PDFMinerPDFasHTMLLoader`, `PyMuPDFLoader`, and `PDFPlumberLoader` were
updated.
- The updates follow the approach used to update `PyPDFLoader` for the
same behavior in #12092
- The `PyMuPDFLoader` changes required additional work in updating
`langchain.document_loaders.parsers.pdf.PyMuPDFParser` to be able to
process either an `io.BufferedReader` (from local pdf) or `io.BytesIO`
(from online pdf)
- The `PDFMinerPDFasHTMLLoader` change used a simpler approach since the
metadata is assigned by the loader and not the parser
  - **Issue:** Fixes #7034
  - **Dependencies:** None


```python
# PyPDFium2Loader example:
# old behavior
>>> from langchain.document_loaders import PyPDFium2Loader
>>> loader = PyPDFium2Loader('https://arxiv.org/pdf/1706.03762.pdf')
>>> docs = loader.load()
>>> docs[0].metadata
{'source': '/var/folders/7z/d5dt407n673drh1f5cm8spj40000gn/T/tmpm5oqa92f/tmp.pdf', 'page': 0}

# new behavior
>>> from langchain.document_loaders import PyPDFium2Loader
>>> loader = PyPDFium2Loader('https://arxiv.org/pdf/1706.03762.pdf')
>>> docs = loader.load()
>>> docs[0].metadata
{'source': 'https://arxiv.org/pdf/1706.03762.pdf', 'page': 0}
```
10 months ago
Toshish Jawale 6f64cb5078
Remove deprecated param and flexibility for prompt (#13310)
- **Description:** Updated to remove the deprecated parameter penalty_alpha,
and use a string variation of the prompt rather than a JSON object for better
flexibility.
  - **Dependencies:** N/A
  - **Tag maintainer:** @eyurtsev
  - **Twitter handle:** @symbldotai

---------

Co-authored-by: toshishjawale <toshish@symbl.ai>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
10 months ago
Tomaz Bratanic 3eb391561b
langchain[minor]: Reduce the number of tokens required to describe a Cypher/Neo4j schema (#13851)
Instead of using JSON-like syntax to describe node and relationship
properties, we changed to a shorter and more concise schema description.

Old:

```
        Node properties are the following:
        [{'properties': [{'property': 'name', 'type': 'STRING'}], 'labels': 'Movie'}, {'properties': [{'property': 'name', 'type': 'STRING'}], 'labels': 'Actor'}]
        Relationship properties are the following:
        []
        The relationships are the following:
        ['(:Actor)-[:ACTED_IN]->(:Movie)']
```

New:

```
Node properties are the following:
Movie {name: STRING},Actor {name: STRING}
Relationship properties are the following:

The relationships are the following:
(:Actor)-[:ACTED_IN]->(:Movie)
```
10 months ago
Sauhaard 7ec4dbeb80
langchain[minor]: Add StackExchange API integration (#14002)
Implements
[#12115](https://github.com/langchain-ai/langchain/issues/12115)

Who can review?
@baskaryan , @eyurtsev , @hwchase17 

Integrated Stack Exchange API into Langchain, enabling access to diverse
communities within the platform. This addition enhances Langchain's
capabilities by allowing users to query Stack Exchange for specialized
information and engage in discussions. The integration provides seamless
interaction with Stack Exchange content, offering content from varied
knowledge repositories.

A notebook example and test cases were included to demonstrate the
functionality and reliability of this integration.

- Add StackExchange as a tool.
- Add unit test for the StackExchange wrapper and tool.
- Add documentation for the StackExchange wrapper and tool.

If you have time, could you please review the code and provide any
feedback as necessary? My team welcomes any suggestions.
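
A minimal usage sketch of the wrapper (assuming it is exported from `langchain.utilities`; the `stackapi` package and network access are required):

```python
from langchain.utilities import StackExchangeAPIWrapper

stackexchange = StackExchangeAPIWrapper()
# Query Stack Overflow (the default site) for relevant questions and excerpts.
print(stackexchange.run("zsh: command not found: python"))
```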

---------

Co-authored-by: Yuval Kamani <yuvalkamani@gmail.com>
Co-authored-by: Aryan Thakur <aryanthakur@Aryans-MacBook-Pro.local>
Co-authored-by: Manas1818 <79381912+manas1818@users.noreply.github.com>
Co-authored-by: aryan-thakur <61063777+aryan-thakur@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
Bagatur d4405bc94e
langchain[patch]: Release 0.0.343 (#14037) 10 months ago
Yves Zumbühl 9c0ad0cebb
langchain[patch]: Improve HyDe with custom prompts and ability to supply the run_manager (#14016)
- **Description:** The class only allows selecting from a few
predefined prompts from the paper. That is not ideal, since other use
cases might need a custom prompt. The changes made allow for this. To be
able to monitor those runs, I also added functionality to supply a custom
run_manager.
  - **Issue:** no issue, but a new feature,
  - **Dependencies:** none,
  - **Tag maintainer:** @hwchase17,
  - **Twitter handle:** @yvesloy

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
Chad Norvell 1c4bfb8c5f
langchain[patch]: Mathpix PDF loader supports arbitrary extra params (#13950)
- **Description:** Support providing whatever extra parameters you want
to the Mathpix PDF loader API request.
  - **Issue:** #12773
  - **Dependencies:** None

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
Unai Garay Maestre 9e2ae866c4
langchain[patch]: Adds progress bar to GooglePalmEmbeddings (#13812)
- **Description:** Adds a tqdm progress bar to GooglePalmEmbeddings when
embedding a list.
  - **Issue:** #13637
  - **Dependencies:** TQDM as a main dependency (instead of extra)


Signed-off-by: ugm2 <unaigaraymaestre@gmail.com>

---------

Signed-off-by: ugm2 <unaigaraymaestre@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
10 months ago
David Norman a578076aea
Mask api key for Together LLM (#13981)
- **Description:** Add unit tests and mask api key for Together LLM
- **Issue:** the issue
https://github.com/langchain-ai/langchain/issues/12165 ,
  - **Dependencies:** N/A
  - **Tag maintainer:** ?,
  - **Twitter handle:** N/A

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
10 months ago
Johnny 6463d2d0bd
small fix matching engine AttributeError - object has no attribute (#13763)
This PR fixes an AttributeError (endpoint object has no attribute
"_public_match_client") when using GCP Matching Engine with a private VPC
network.

@baskaryan, @eyurtsev, @hwchase17.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
10 months ago
Amyh102 750485eaa8
Add object parsing functionality (#13864)
* **Description:** Parses huggingface dataset Sequence objects into
strings for Document loading.
* **Issue:** Fixes #10674 
* **Tag maintainer:** @baskaryan @eyurtsev

---------

Co-authored-by: Amy Han <amyhan@Amys-Air.lan>
Co-authored-by: Amy Han <amyhan@Amys-MacBook-Air.local>
10 months ago
ggeutzzang 981f78f920
Fix: (issue #13825) Getting an error with DallEAPIWrapper (#13874)
- **Description:** As of OpenAI's Python package 1.0, the existing
DallEAPIWrapper does not work correctly, so the example in the LangChain
Documentation link below does not work either.

https://python.langchain.com/docs/integrations/tools/dalle_image_generator
Also, since OpenAI only supports DALL-E version 2 or version 3, I
modified the DallEAPIWrapper to support it.

  - **Issue:** #13825 

  - **Twitter handle:** ggeutzzang
10 months ago
Kunal 74045bf5c0
max length attribute for spacy splitter for large docs (#13875)
For large documents the spacy splitter doesn't work; it throws an error
as shown in the screenshot below.
The reason is that its default max_length is 1000000 and there was no option to
increase it, so I added one in this PR.


![image](https://github.com/langchain-ai/langchain/assets/73680423/613625c3-0e21-4834-9aad-2a73cf56eecc)

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
Wang Wei fe9341a29c
feat: Add ERNIE-Bot-8K model support for ErnieBotChat. (#13716)
- **Description:** According to the document
https://cloud.baidu.com/doc/WENXINWORKSHOP/s/6lp69is2a, add ERNIE-Bot-8K
model support for ErnieBotChat.
- **Dependencies:** Before using ERNIE-Bot-8K, you must have access
authority for the model.
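
A minimal sketch (assuming the model is selected via `model_name`; valid ERNIE credentials are required):

```python
from langchain.chat_models import ErnieBotChat
from langchain.schema.messages import HumanMessage

chat = ErnieBotChat(
    model_name="ERNIE-Bot-8K",
    ernie_client_id="...",      # placeholder credentials
    ernie_client_secret="...",
)
print(chat([HumanMessage(content="hello")]))
```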
10 months ago
Burak Ömür 0e462b72ef
Update openai/create_llm_result function to consider kwargs (#13815)
- **Description:** updates `create_llm_result` function within
`openai.py` to consider latest `params`,
  - **Issue:** #8928
  - **Dependencies:** -,
  - **Tag maintainer:** -
  - **Twitter handle:** [burkomr](https://twitter.com/burkomr)

---------

Co-authored-by: Burak Ömür <burakomur@retorio.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
10 months ago
chyroc f97ab84c6b
Merge pull request #13907
* feat: mask api_key for jina
10 months ago
nhywieza 9b86fb3fcb
secretStr for baichuan chat model api key (#13946)
Merge pull request #13946
* secretStr for baichuan chat model api key
10 months ago
卢靖轩 aff1dba252
Merge pull request #13945
* feat: mask api key for nlpcloud
10 months ago
Leonid Kuligin 85bb3a418c
Switched VertexAI models from preview (#13657)
- **Description:** VertexAI models are now GA, moved away from using
preview ones from the SDK
  - **Issue:** #13606

---------

Co-authored-by: Nuno Campos <nuno@boringbits.io>
10 months ago
Erick Friis 5eca1bd93f
Library Licenses (#13300)
Same change as #8403 but in other libs

also updates (c) LangChain Inc. instead of @hwchase17
10 months ago
Bagatur 14799b139a
infra[patch]: add base deps and fix docs lint (#13998) 10 months ago
Théo LEBRUN 926d4cfda7
Set default region from boto3 session for Bedrock (#13694)
- **Description:** Set default region from boto3 session for Bedrock 
- **Issue:** #13683
10 months ago
Snow 1a33e5b500
Repair Wikipedia document loader `load_max_docs` and improve test coverage. (#13769)
**Description:** 

Repair Wikipedia document loader `load_max_docs` and improve test
coverage.

**Issue:** 

The Wikipedia document loader was not respecting the `load_max_docs`
parameter (not reported) and would always return a maximum of 10
documents. This is because the API wrapper (in `utilities/wikipedia.py`)
wasn't passing `top_k_results` to the underlying [Wikipedia
library](https://wikipedia.readthedocs.io/en/latest/code.html#module-wikipedia).
By default this library returns 10 results.

The default number of results for the document loader has been reduced
from 100 to 25. This is because loading 100 results takes a very long
time and is an inconvenient default. It should possibly be 10.

In addition, the documentation for the loader reported that there was a
hard limit (300) on the number of documents returned. In actuality 300
is the maximum Wikipedia query character length set by the API wrapper.

Tests have been added for the document loader (previously missing) and
to test the correct numbers of documents are being returned by each
class, both by default, and when overridden. Also repaired is the
`assert_docs` test which has been updated to correctly test for the
default metadata (which includes `source` in recent releases).
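
A quick sketch of the repaired behavior (network access to Wikipedia is required):

```python
from langchain.document_loaders import WikipediaLoader

# Previously this returned at most 10 documents regardless of load_max_docs.
docs = WikipediaLoader(query="Python (programming language)", load_max_docs=15).load()
print(len(docs))
print(docs[0].metadata.get("source"))
```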

**Dependencies:** 
nil

**Tag maintainer:**
@leo-gan

**Twitter handle:**
@queenvictoria
10 months ago
Bob Lin 04c4878306
Remove `python_repl` from _BASE_TOOLS (#13962)
### **Description:**

Previously `python_repl` was a built-in tool, but now it has been moved
to `langchain_experimental`.

When I use `load_tools` I get an error:

```python
In [1]: from langchain.agents import load_tools

In [2]: load_tools(["python_repl"])
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
Cell In[2], line 1
----> 1 load_tools(["python_repl"])

File ~/workspace/langchain/libs/langchain/langchain/agents/load_tools.py:530, in load_tools(tool_names, llm, callbacks, **kwargs)
    528     tool_names.extend(requests_method_tools)
    529 elif name in _BASE_TOOLS:
--> 530     tools.append(_BASE_TOOLS[name]())
    531 elif name in _LLM_TOOLS:
    532     if llm is None:

File ~/workspace/langchain/libs/langchain/langchain/agents/load_tools.py:84, in _get_python_repl()
     83 def _get_python_repl() -> BaseTool:
---> 84     raise ImportError(
     85         "This tool has been moved to langchain experiment. "
     86         "This tool has access to a python REPL. "
     87         "For best practices make sure to sandbox this tool. "
     88         "Read https://github.com/langchain-ai/langchain/blob/master/SECURITY.md "
     89         "To keep using this code as is, install langchain experimental and "
     90         "update relevant imports replacing 'langchain' with 'langchain_experimental'"
     91     )

ImportError: This tool has been moved to langchain experiment. This tool has access to a python REPL. For best practices make sure to sandbox this tool. Read https://github.com/langchain-ai/langchain/blob/master/SECURITY.md To keep using this code as is, install langchain experimental and update relevant imports replacing 'langchain' with 'langchain_experimental'
```

In this case, it is very confusing. Since it is no longer a
built-in tool, it can be removed from `_BASE_TOOLS`.
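
For anyone who still needs the tool, the experimental package can be used directly (a minimal sketch, assuming `langchain_experimental` is installed):

```python
from langchain_experimental.tools import PythonREPLTool

# The REPL tool now lives in langchain_experimental; sandbox it appropriately.
tool = PythonREPLTool()
print(tool.run("print(1 + 1)"))
```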

### **Issue:** 

https://github.com/langchain-ai/langchain/issues/13858,
https://github.com/langchain-ai/langchain/issues/13859,
https://github.com/langchain-ai/langchain/issues/13856
### **Twitter handle:** 

[lin_bob57617](https://twitter.com/lin_bob57617)
10 months ago
Leonid Ganeline 52eee458bb
renamed `google_vertex_ai_vector_search` notebook (#13484)
The `integrations/vectorstores/matchingengine.ipynb` example has the
"Google Vertex AI Vector Search" title. This places the title in the
wrong order in the ToC (which is sorted by file name).
- Renamed `integrations/vectorstores/matchingengine.ipynb` to
`integrations/vectorstores/google_vertex_ai_vector_search.ipynb`.
- Updated the corresponding comment in the docstring
- Rerouted old URL to a new URL

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
10 months ago
Leonid Ganeline bf5787f58b
experimental[patch]: fixed namespace bug (#13585)
It was:
`from langchain.schema.prompts import BasePromptTemplate`
but because of the breaking change in the namespace, it is now
`from langchain.schema.prompt_template import BasePromptTemplate`

This bug prevented building the API Reference for langchain_experimental.
10 months ago
Taqi Jaffri 144710ad9a
langchain[minor]: Updated DocugamiLoader, includes breaking changes (#13265)
There are the following main changes in this PR:

1. Rewrite of the DocugamiLoader to not do any XML parsing of the DGML
format internally, and instead use the `dgml-utils` library we are
separately working on. This is a very lightweight dependency.
2. Added MMR search type as an option to multi-vector retriever, similar
to other retrievers. MMR is especially useful when using Docugami for
RAG since we deal with large sets of documents within which a few might
be duplicates and straight similarity based search doesn't give great
results in many cases.

We are @docugami on twitter, and I am @tjaffri

---------

Co-authored-by: Taqi Jaffri <tjaffri@docugami.com>
10 months ago
Bagatur a20e8f8bb0
experimental[patch]: release 0.0.43 (#13570) 10 months ago
Bagatur d8fe987ef5
langchain[patch]: release 0.0.342 (#13992) 10 months ago
david qiu 9fb6805be4
langchain[minor]: Add retriever for Knowledge Bases for Amazon Bedrock (#13980)
- **Description:** Adds a retriever implementation for [Knowledge Bases
for Amazon Bedrock](https://aws.amazon.com/bedrock/knowledge-bases/), a
new service announced at AWS re:Invent, shortly before this PR was
opened. This depends on the `bedrock-agent-runtime` service, which will
be included in a future version of `boto3` and of `botocore`. We will
open a follow-up PR documenting the minimum required versions of `boto3`
and `botocore` after that information is available.
  - **Issue:** N/A
  - **Dependencies:** `boto3>=1.33.2, botocore>=1.33.2`
  - **Tag maintainer:** @baskaryan
  - **Twitter handles:** `@pjain7` `@dead_letter_q`

This PR includes a documentation notebook under
`docs/docs/integrations/retrievers`, which I (@dlqqq) have verified
independently.
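
A minimal usage sketch (the knowledge base ID below is a placeholder; AWS credentials with access to Bedrock are assumed):

```python
from langchain.retrievers import AmazonKnowledgeBasesRetriever

retriever = AmazonKnowledgeBasesRetriever(
    knowledge_base_id="ABCDEFGHIJ",  # placeholder knowledge base ID
    retrieval_config={"vectorSearchConfiguration": {"numberOfResults": 4}},
)
docs = retriever.get_relevant_documents("What is Amazon Bedrock?")
for doc in docs:
    print(doc.metadata.get("score"), doc.page_content[:80])
```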

EDIT: `bedrock-agent-runtime` service is now included in
`boto3>=1.33.2`:
5cf793f493

---------

Co-authored-by: Piyush Jain <piyushjain@duck.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
Bagatur 1aed2d1f08
core[patch]: release 0.0.7 (#13989) 10 months ago
David Duong eb67f07e32
Track RunnableAssign as a separate run trace (#13972)
Addressing incorrect order being sent to callbacks / tracers, due to the
nature of threading

---------

Co-authored-by: Nuno Campos <nuno@boringbits.io>
10 months ago
Nuno Campos 0f255bb6c4
In Runnable.stream_log build up final_output from adding output chunks (#12781)
Add an arg to omit the streamed_output list; in cases where final_output is
enough, this saves bandwidth.

10 months ago
Nuno Campos 970fe23feb
Fixes for opengpts release (#13960) 10 months ago
David Duong 947daaf833
Exclude Bedrock client and credentials_profile_name fields from serialisation (#13603) 10 months ago
Bagatur 48fbc5513d
infra[patch], langchain[patch]: fix test deps and upper bound langchain dep on core(#13984) 10 months ago
Stefano Lottini 1fd724293b
Astra DB vector store, move constructor docstring to class docstring (#13784)
This PR rearranges the docstring for the `AstraDB` vector store class so
as to have all useful information in the _class_ docstring for ease of
reading.

(incidentally, due to an oversight, the docstring that was in the
constructor ended up buried below some lines of code, thereby
disappearing altogether from accessibility. Apologies.)
10 months ago
Johannes Foulds fc40bd4cdb
AnthropicFunctions function_call compatibility (#13901)
- **Description:** Updates to `AnthropicFunctions` to be compatible with
the OpenAI `function_call` functionality.
- **Issue:** The functionality to indicate `auto`, `none` and a forced
function_call was not completely implemented in the existing code.
  - **Dependencies:** None
- **Tag maintainer:** @baskaryan , and any of the other maintainers if
needed.
  - **Twitter handle:** None

I have specifically tested this functionality via AWS Bedrock with the
Claude-2 and Claude-Instant models.
10 months ago
mengjincn 05ea4fd37d
fix merge None value and non None value error (#13703)
10 months ago
Ali Orozgani 32d794f5a3
iMessage loader: implement message content extraction from attributed… (#13634)
- **Description:** We are adding functionality to extract message
content from the `attributedBody` field of the database, in case the
content is not in the `text` field.
  - **Issue:** Closes #13326 and #10680 
  - **Dependencies:** None.
  - **Tag maintainer:** @eyurtsev, @hwchase17

---------

Co-authored-by: onotate <johnp.pham@mail.utoronto.ca>
10 months ago
William FH e5256bcb69
[Evals] Add Project Tags (#13982)
Add them to project extra
10 months ago
Nuno Campos e0bcc98436
infra[patch]: Use langchain core in-tree as a dev dependency (#13957)
Using the published version means master is broken for contributors
whenever we make changes in one lib that depend on the other.
10 months ago
unifyh 2703a1b061
Fix `MarkdownHeaderTextSplitter` not recognizing tilde-fenced code blocks (#13511)
- **Description:** Previously `MarkdownHeaderTextSplitter` did not
consider tilde-fenced code blocks
(https://spec.commonmark.org/0.30/#fenced-code-blocks). This PR fixes
that.
   ````md
   # Bug caused by previous implementation:
   ~~~py
   foo()
   # This is a comment that would be considered header
   bar()
   ~~~
   ````
- **Tag maintainer:** @baskaryan
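
A small sketch of the fixed behavior:

```python
from langchain.text_splitter import MarkdownHeaderTextSplitter

md = "# Title\n\n~~~py\nfoo()\n# This is a comment, not a header\nbar()\n~~~\n"
splitter = MarkdownHeaderTextSplitter(headers_to_split_on=[("#", "h1")])

# The comment inside the tilde-fenced block is no longer treated as a header.
for doc in splitter.split_text(md):
    print(doc.metadata, repr(doc.page_content))
```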
10 months ago
Leonid Ganeline 7929b26017
office365 toolkit bug fixes (#13618)
Several bug fixes:
- emails: `cc` was being used instead of `bcc`.
- errors in the truncation descriptions
- no truncation of the `message_search`
Several updates:
- generalized UTC format 
- truncation limit can be changed now in _call()
10 months ago
William FH 60309341bd
Eval Error Key (#13974) 10 months ago
Erick Friis f9bef600f1
RELEASE: core 0.0.7 (#13973) 10 months ago
Nicolas Bondoux e17edc4d0b
RunnableLambda: create afunc instance from func when not provided (#13408)
Fixes #13407.

This workaround consists of letting the RunnableLambda create its
self.afunc from its self.func when self.afunc is not provided; the
change has no dependencies.
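
A minimal sketch of what this enables:

```python
import asyncio

from langchain_core.runnables import RunnableLambda

# Only a sync function is provided; the async path is derived from it when needed.
runnable = RunnableLambda(lambda x: x + 1)

print(runnable.invoke(1))                # 2
print(asyncio.run(runnable.ainvoke(1)))  # 2
```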

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Nuno Campos <nuno@langchain.dev>
10 months ago
Nuno Campos 391f200eaa
Implement stream() and astream() for agents (#12783)
```
---- chunk 1
{'actions': [AgentActionMessageLog(tool='Search', tool_input="Leo DiCaprio's current girlfriend", log="\nInvoking: `Search` with `Leo DiCaprio's current girlfriend`\n\n\n", message_log=[AIMessageChunk(content='', additional_kwargs={'function_call': {'name': 'Search', 'arguments': '{\n  "__arg1": "Leo DiCaprio\'s current girlfriend"\n}'}})])],
 'messages': [AIMessageChunk(content='', additional_kwargs={'function_call': {'name': 'Search', 'arguments': '{\n  "__arg1": "Leo DiCaprio\'s current girlfriend"\n}'}})]}
---- chunk 2
{'messages': [FunctionMessage(content="According to Us, the 48-year-old actor is now “exclusively” dating Italian model Vittoria Ceretti. A source told Us that DiCaprio is “completely smitten” with Ceretti, and their relationship is “going so well that Leo's actually being exclusive.”", name='Search')],
 'steps': [AgentStep(action=AgentActionMessageLog(tool='Search', tool_input="Leo DiCaprio's current girlfriend", log="\nInvoking: `Search` with `Leo DiCaprio's current girlfriend`\n\n\n", message_log=[AIMessageChunk(content='', additional_kwargs={'function_call': {'name': 'Search', 'arguments': '{\n  "__arg1": "Leo DiCaprio\'s current girlfriend"\n}'}})]), observation="According to Us, the 48-year-old actor is now “exclusively” dating Italian model Vittoria Ceretti. A source told Us that DiCaprio is “completely smitten” with Ceretti, and their relationship is “going so well that Leo's actually being exclusive.”")]}
---- chunk 3
{'actions': [AgentActionMessageLog(tool='Search', tool_input='Vittoria Ceretti age', log='\nInvoking: `Search` with `Vittoria Ceretti age`\n\n\n', message_log=[AIMessageChunk(content='', additional_kwargs={'function_call': {'name': 'Search', 'arguments': '{\n  "__arg1": "Vittoria Ceretti age"\n}'}})])],
 'messages': [AIMessageChunk(content='', additional_kwargs={'function_call': {'name': 'Search', 'arguments': '{\n  "__arg1": "Vittoria Ceretti age"\n}'}})]}
---- chunk 4
{'messages': [FunctionMessage(content='25 years', name='Search')],
 'steps': [AgentStep(action=AgentActionMessageLog(tool='Search', tool_input='Vittoria Ceretti age', log='\nInvoking: `Search` with `Vittoria Ceretti age`\n\n\n', message_log=[AIMessageChunk(content='', additional_kwargs={'function_call': {'name': 'Search', 'arguments': '{\n  "__arg1": "Vittoria Ceretti age"\n}'}})]), observation='25 years')]}
---- chunk 5
{'actions': [AgentActionMessageLog(tool='Calculator', tool_input='25^0.43', log='\nInvoking: `Calculator` with `25^0.43`\n\n\n', message_log=[AIMessageChunk(content='', additional_kwargs={'function_call': {'name': 'Calculator', 'arguments': '{\n  "__arg1": "25^0.43"\n}'}})])],
 'messages': [AIMessageChunk(content='', additional_kwargs={'function_call': {'name': 'Calculator', 'arguments': '{\n  "__arg1": "25^0.43"\n}'}})]}
---- chunk 6
{'messages': [FunctionMessage(content='Answer: 3.991298452658078', name='Calculator')],
 'steps': [AgentStep(action=AgentActionMessageLog(tool='Calculator', tool_input='25^0.43', log='\nInvoking: `Calculator` with `25^0.43`\n\n\n', message_log=[AIMessageChunk(content='', additional_kwargs={'function_call': {'name': 'Calculator', 'arguments': '{\n  "__arg1": "25^0.43"\n}'}})]), observation='Answer: 3.991298452658078')]}
---- chunk 7
{'messages': [AIMessage(content="Leonardo DiCaprio's current girlfriend is the Italian model Vittoria Ceretti, who is 25 years old. Her age raised to the 0.43 power is approximately 3.99.")],
 'output': "Leonardo DiCaprio's current girlfriend is the Italian model "
           'Vittoria Ceretti, who is 25 years old. Her age raised to the 0.43 '
           'power is approximately 3.99.'}
---- final
{'actions': [AgentActionMessageLog(tool='Search', tool_input="Leo DiCaprio's current girlfriend", log="\nInvoking: `Search` with `Leo DiCaprio's current girlfriend`\n\n\n", message_log=[AIMessageChunk(content='', additional_kwargs={'function_call': {'name': 'Search', 'arguments': '{\n  "__arg1": "Leo DiCaprio\'s current girlfriend"\n}'}})]),
             AgentActionMessageLog(tool='Search', tool_input='Vittoria Ceretti age', log='\nInvoking: `Search` with `Vittoria Ceretti age`\n\n\n', message_log=[AIMessageChunk(content='', additional_kwargs={'function_call': {'name': 'Search', 'arguments': '{\n  "__arg1": "Vittoria Ceretti age"\n}'}})]),
             AgentActionMessageLog(tool='Calculator', tool_input='25^0.43', log='\nInvoking: `Calculator` with `25^0.43`\n\n\n', message_log=[AIMessageChunk(content='', additional_kwargs={'function_call': {'name': 'Calculator', 'arguments': '{\n  "__arg1": "25^0.43"\n}'}})])],
 'messages': [AIMessageChunk(content='', additional_kwargs={'function_call': {'name': 'Search', 'arguments': '{\n  "__arg1": "Leo DiCaprio\'s current girlfriend"\n}'}}),
              FunctionMessage(content="According to Us, the 48-year-old actor is now “exclusively” dating Italian model Vittoria Ceretti. A source told Us that DiCaprio is “completely smitten” with Ceretti, and their relationship is “going so well that Leo's actually being exclusive.”", name='Search'),
              AIMessageChunk(content='', additional_kwargs={'function_call': {'name': 'Search', 'arguments': '{\n  "__arg1": "Vittoria Ceretti age"\n}'}}),
              FunctionMessage(content='25 years', name='Search'),
              AIMessageChunk(content='', additional_kwargs={'function_call': {'name': 'Calculator', 'arguments': '{\n  "__arg1": "25^0.43"\n}'}}),
              FunctionMessage(content='Answer: 3.991298452658078', name='Calculator'),
              AIMessage(content="Leonardo DiCaprio's current girlfriend is the Italian model Vittoria Ceretti, who is 25 years old. Her age raised to the 0.43 power is approximately 3.99.")],
 'output': "Leonardo DiCaprio's current girlfriend is the Italian model "
           'Vittoria Ceretti, who is 25 years old. Her age raised to the 0.43 '
           'power is approximately 3.99.',
 'steps': [AgentStep(action=AgentActionMessageLog(tool='Search', tool_input="Leo DiCaprio's current girlfriend", log="\nInvoking: `Search` with `Leo DiCaprio's current girlfriend`\n\n\n", message_log=[AIMessageChunk(content='', additional_kwargs={'function_call': {'name': 'Search', 'arguments': '{\n  "__arg1": "Leo DiCaprio\'s current girlfriend"\n}'}})]), observation="According to Us, the 48-year-old actor is now “exclusively” dating Italian model Vittoria Ceretti. A source told Us that DiCaprio is “completely smitten” with Ceretti, and their relationship is “going so well that Leo's actually being exclusive.”"),
           AgentStep(action=AgentActionMessageLog(tool='Search', tool_input='Vittoria Ceretti age', log='\nInvoking: `Search` with `Vittoria Ceretti age`\n\n\n', message_log=[AIMessageChunk(content='', additional_kwargs={'function_call': {'name': 'Search', 'arguments': '{\n  "__arg1": "Vittoria Ceretti age"\n}'}})]), observation='25 years'),
           AgentStep(action=AgentActionMessageLog(tool='Calculator', tool_input='25^0.43', log='\nInvoking: `Calculator` with `25^0.43`\n\n\n', message_log=[AIMessageChunk(content='', additional_kwargs={'function_call': {'name': 'Calculator', 'arguments': '{\n  "__arg1": "25^0.43"\n}'}})]), observation='Answer: 3.991298452658078')]}
```
10 months ago
Michael Feil 686162670e
langchain[minor]: Adding `infinity` embedding integration. (#13928)
This adds an integration with https://github.com/michaelfeil/infinity. Users
requested it in https://github.com/michaelfeil/infinity/issues/36
@saatvikshah

Follows my implementation of gradient.ai.
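
A minimal sketch (the class name, parameters, and local server URL are assumptions; a running Infinity server is required):

```python
from langchain.embeddings import InfinityEmbeddings

embeddings = InfinityEmbeddings(
    model="BAAI/bge-small-en-v1.5",
    infinity_api_url="http://localhost:7997",  # placeholder local server URL
)
print(embeddings.embed_query("hello")[:3])
print(len(embeddings.embed_documents(["hello", "world"])))
```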

Feedback 1: Well done - I love your CI / repo / poetry setup - I adapted
a lot in https://github.com/michaelfeil/infinity.
Feedback 2: Not so good: The openai integration contains too much reverse
engineering - in general, projects such as michaelfeil/infinity and
huggingface/text-embeddings-inference are compatible with the `pip install
openai` package.

Reverse engineering like this really hinders usage for me:

8e88ba16a8/libs/langchain/langchain/embeddings/openai.py (L347)

8e88ba16a8/libs/langchain/langchain/embeddings/openai.py (L351)
- it is about preventing 3rd-party providers from using the same URL, and it uses
interfaces of openai that are not publicly documented.
10 months ago
Bagatur 10a6e7cbb6
langchain[patch], core[patch]: Make common utils public (#13932)
- rename `langchain_core.chat_models.base._generate_from_stream` -> `generate_from_stream`
- rename `langchain_core.chat_models.base._agenerate_from_stream` -> `agenerate_from_stream`
- export `langchain_core.utils.utils.build_extra_kwargs` from `langchain_core.utils`
10 months ago
Oleksandr Yaremchuk c0277d06e8
experimental[patch] Update prompt injection model (#13930)
- **Description:** The existing model used for prompt injection is quite
outdated, so we fine-tuned and open-sourced a new model based on the same
base model, deberta-v3-base from Microsoft -
[laiyer/deberta-v3-base-prompt-injection](https://huggingface.co/laiyer/deberta-v3-base-prompt-injection).
It supports more up-to-date injections and is less prone to
false positives.
  - **Dependencies:** No
  - **Tag maintainer:** -
  - **Twitter handle:** @alex_yaremchuk

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
Bob Lin e6ebde9688
experimental[patch]: Add experimental.agent imports (#13839)
- **Description:** The experimental package needs to be compatible with
the usage of importing agents

For example, if i use `from langchain.agents import
create_pandas_dataframe_agent`, running the program will prompt the
following information:

```
Traceback (most recent call last):
   File "/Users/dongwm/test/main.py", line 1, in <module>
     from langchain.agents import create_pandas_dataframe_agent
   File "/Users/dongwm/test/venv/lib/python3.11/site-packages/langchain/agents/__init__.py", line 87, in __getattr__
     raise ImportError(
ImportError: create_pandas_dataframe_agent has been moved to langchain experimental. See https://github.com/langchain-ai/langchain/discussions/11680 for more information.
Please update your import statement from: `langchain.agents.create_pandas_dataframe_agent` to `langchain_experimental.agents.create_pandas_dataframe_agent`.
```

But when I changed to `from langchain_experimental.agents import
create_pandas_dataframe_agent`, it was actually wrong:

```python
Traceback (most recent call last):
  File "/Users/dongwm/test/main.py", line 2, in <module>
    from langchain_experimental.agents import create_pandas_dataframe_agent
ImportError: cannot import name 'create_pandas_dataframe_agent' from 'langchain_experimental.agents' (/Users/dongwm/test/venv/lib/python3.11/site-packages/langchain_experimental/agents/__init__.py)
```

I should use `from langchain_experimental.agents.agent_toolkits import
create_pandas_dataframe_agent`. In order to solve the problem and make
it compatible, I added additional import code to the
langchain_experimental package. Now `from
langchain_experimental.agents import create_pandas_dataframe_agent` can be used.

  - **Twitter handle:** [lin_bob57617](https://twitter.com/lin_bob57617)
10 months ago
Tyler Titsworth afcfa2a5e7
langchain[patch]: Add progress bar option to OllamaEmbeddings (#13882)
- **Description:** Adds a tqdm progress bar to OllamaEmbeddings when
embedding a list.
- **Issue:** Related to #13637, but extended to Ollama.
- **Dependencies:** `tqdm` made a necessary dependency.

Thanks to @ugm2 for helping identify a common problem. Embeddings take a
very long time to finish on local machines, and require a progress bar
to help identify if one should even attempt the workload.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
jeremyb-data cd77fba562
Improvement: Weaviate multitenant adddocs (#13827)
- **Description:** Added a line to pass the tenant parameter to
add_data_object
  - **Issue:** An extra line added from the fix for #9956
  - **Dependencies:** n/a
  - **Tag maintainer:** @baskaryan 

Tested locally, works as expected with the line change.

---------

Co-authored-by: Simon Dai <simon6752@gmail.com>
10 months ago
jiangying 3e30cd8261
NIT: comment typo (#13817) 10 months ago
Assaf Toledo ba62ff89cc
BUGFIX: Support for elastic indices that don't return 'metadata' in '_source' (#13903)
Description: Some Elastic indexes do not return a 'metadata' field in
'_source'. However, prior to this PR, the code assumed there was always a
'metadata' field. This PR adds support for cases where the field is
missing by adding it manually.

Issue: #13869
10 months ago
Enric Soler Rastrollo c156d0281a
BUGFIX: Use embedding key in azure_cosmos_db index creation (#13919)
Description: Implement embedding key parametrisation
Issue: https://github.com/langchain-ai/langchain/issues/13918
Dependencies: None
Tag maintainer: @hwchase17 @izzymsft
Twitter handle: @MaddogoS
10 months ago
Bagatur ac67422a3d
IMPROVEMENT: import Document from core (#13905) 10 months ago
chyroc 886bc2d50a
IMPROVEMENT: fix qianfan validate_environment typo (#13908) 10 months ago
Chengzu Ou 4b8e053fe8
FEATURE: Add Databricks Vector Search as a new vector store (#13621)
**Description:**
This PR adds Databricks Vector Search as a new vector store in
LangChain.

- [x] Add `DatabricksVectorSearch` in `langchain/vectorstores/`
- [x] Unit tests
- [x] Add
[`databricks-vectorsearch`](https://pypi.org/project/databricks-vectorsearch/)
as a new optional dependency

We ran the following checks:
- `make format` passed  
- `make lint` failed but the failures were caused by other files
    + Files touched by this PR passed the linter  
- `make test` passed  
- `make coverage` failed but the failures were caused by other files.
Tests added by or related to this PR all passed
+ langchain/vectorstores/databricks_vector_search.py test coverage 94% 
- `make spell_check` passed  

The example notebook and updates to the [provider's documentation
page](https://github.com/langchain-ai/langchain/blob/master/docs/docs/integrations/providers/databricks.md)
will be added later in a separate PR.

**Dependencies:**
Optional dependency:
[`databricks-vectorsearch`](https://pypi.org/project/databricks-vectorsearch/)

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
Leonid Kuligin 25387db432
BUGFIX: add support for various OSS images from Vertex Model Garden (#13917)
- **Description:** add support for various OSS images from Model
Garden
  - **Issue:** #13370
10 months ago
Eugene Yurtsev e186637921
Document Runnable Binding (#13927)
Document runnable binding
10 months ago
Bagatur 46b3311190
RELEASE: 0.0.341 (#13926) 10 months ago
umair mehmood b3e08f9239
improvement: fix chat prompt loading from config (#13818)
Add loader for loading chat prompt from config file.

fixed: #13667

@efriis 
@baskaryan
10 months ago
Nuno Campos 8a3e0c9afa
Add option to prefix config keys in configurable_alts (#13714) 10 months ago
ggeutzzang 3749af79ae
DOCS: fixed error in the docstring of RunnablePassthrough class (#13843)
This pull request addresses an issue found in the example code within
the docstring of `libs/core/langchain_core/runnables/passthrough.py`

The original code snippet caused a `NameError` due to the missing import
of `RunnableLambda`. The error was as follows:
```
     12     return "completion"
     13 
---> 14 chain = RunnableLambda(fake_llm) | {
     15     'original': RunnablePassthrough(), # Original LLM output
     16     'parsed': lambda text: text[::-1] # Parsing logic

NameError: name 'RunnableLambda' is not defined
```
To resolve this, I have modified the example code to include the
necessary import statement for `RunnableLambda`. Additionally, I have
adjusted the indentation in the code snippet to ensure consistency and
readability.

The modified code now successfully defines and utilizes
`RunnableLambda`, ensuring that users referencing the docstring will
have a functional and clear example to follow.

There are no related GitHub issues for this particular change.

Modified Code:
```python
from langchain_core.runnables import RunnablePassthrough, RunnableParallel
from langchain_core.runnables import RunnableLambda

runnable = RunnableParallel(
    origin=RunnablePassthrough(),
    modified=lambda x: x+1
)

runnable.invoke(1) # {'origin': 1, 'modified': 2}

def fake_llm(prompt: str) -> str: # Fake LLM for the example
    return "completion"

chain = RunnableLambda(fake_llm) | {
    'original': RunnablePassthrough(), # Original LLM output
    'parsed': lambda text: text[::-1] # Parsing logic
}

chain.invoke('hello') # {'original': 'completion', 'parsed': 'noitelpmoc'}
```

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
Dylan Williams 1983a39894
FEATURE: Add OneNote document loader (#13841)
- **Description:** Added OneNote document loader
  - **Issue:** #12125
  - **Dependencies:** msal

Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
Tomaz Bratanic 1ad65f7a98
BUGFIX: Fix bugs with Cypher validation (#13849)
Fixes https://github.com/langchain-ai/langchain/issues/13803. Thanks to
@sakusaku-rich
10 months ago
Harrison Chase 6a35831128
BUGFIX: export more types (#13886)
Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
Yusuf Khan 935f78c944
FEATURE: Add retriever for Outline (#13889)
- **Description:** Added a retriever for the Outline API to ask
questions on a knowledge base
  - **Issue:** resolves #11814
  - **Dependencies:** None
  - **Tag maintainer:** @baskaryan
10 months ago
Bagatur 0efa59cbb8
RELEASE: 0.0.339rc3 (#13852) 10 months ago
Bagatur 7222c42077
RELEASE: core 0.0.6 (#13853) 10 months ago
raelix c172605ea6
IMPROVEMENT: Added title metadata to GoogleDriveLoader for optional File Loaders (#13832)
- **Description:** Simple change, I just added title metadata to
GoogleDriveLoader for optional File Loaders
  - **Dependencies:** no dependencies
  - **Tag maintainer:** @hwchase17

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
Stefano Lottini 19c68c7652
FEATURE: Astra DB, LLM cache classes (exact-match and semantic cache) (#13834)
This PR provides idiomatic implementations for the exact-match and the
semantic LLM caches using Astra DB as backend through the database's
HTTP JSON API. These caches require the `astrapy` library as dependency.

Comes with integration tests and example usage in the `llm_cache.ipynb`
in the docs.

@baskaryan this is the Astra DB counterpart for the Cassandra classes
you merged some time ago, tagging you for your familiarity with the
topic. Thank you!

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
Stefano Lottini 272df9dcae
Astra DB, chat message history (#13836)
This PR adds a chat message history component that uses Astra DB for
persistence through the JSON API.
The `astrapy` package is required for this class to work.

I have added tests and a small notebook, and updated the relevant
references in the other docs pages.

(@rlancemartin this is the counterpart of the Cassandra equivalent class
you so helpfully reviewed back at the end of June)

Thank you!
10 months ago
Bagatur 58f7e109ac
BUGFIX: Add import types and typevars from core (#13829) 10 months ago
Bagatur 751226e067
bump 0.0.339rc2 (#13787) 10 months ago
Bagatur 300ff01824
RELEASE: core 0.0.5 (#13786) 10 months ago
Bagatur 72c108b003
IMPROVEMENT: filter global warnings properly (#13754) 10 months ago
William FH 163bf165ed
Add Batch Size kwarg to the llm start callback (#13483)
So you can more easily use the token counts directly from the API
endpoint for batch size of 1
10 months ago
Bagatur 0be515f720
RELEASE: 0.0.339rc1 (#13746) 10 months ago
Bagatur 2bc5bd67f7
RELEASE: core 0.0.4 (#13745) 10 months ago
Bagatur 32d087fcb8
REFACTOR: combine core documents files (#13733) 10 months ago
William FH 5b90fe5b1c
Fix locking (#13725) 10 months ago
Bagatur 16af282429
BUGFIX: add prompt imports for backwards compat (#13702) 10 months ago
Bagatur e327bb4ba4
IMPROVEMENT: Conditionally import core type hints (#13700) 10 months ago
dandanwei d47ee1ae79
BUGFIX: redis vector store overwrites falsey metadata (#13652)
- **Description:** This commit fixes a problem where the Redis vector store
changed a metadata value of 0 to an empty value when saving the
document, which was unintended behavior.
  - **Issue:** N/A
  - **Dependencies:** N/A
10 months ago
Bagatur a21e84faf7
BUGFIX: llm backwards compat imports (#13698) 10 months ago
Yujie Qian ace9e64d62
IMPROVEMENT: VoyageEmbeddings embed_general_texts (#13620)
- **Description:** add method embed_general_texts in VoyageEmbeddings to
support input_type
  - **Issue:** 
  - **Dependencies:** 
  - **Tag maintainer:** 
  - **Twitter handle:** @Voyage_AI_
10 months ago
tanujtiwari-at 5064890fcf
BUGFIX: handle tool message type when converting to string (#13626)
**Description:** Currently, if we pass a ToolMessage back to the
chain, it crashes with the error

`Got unsupported message type: `

This fixes it. 

Tested locally

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
Josep Pon Farreny 143049c90f
Added partial_variables to BaseStringMessagePromptTemplate.from_template(...) (#13645)
**Description:** BaseStringMessagePromptTemplate.from_template was
passing the value of partial_variables into cls(...) via **kwargs,
rather than passing it to PromptTemplate.from_template, which resulted
in those *partial_variables* being lost and becoming required
*input_variables*.
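A short sketch of the behaviour this fix enables, using `HumanMessagePromptTemplate` (a subclass) as the example; the expected output is an assumption based on the description above:

```python
from langchain.prompts.chat import HumanMessagePromptTemplate

# With the fix, partial_variables reach PromptTemplate.from_template instead of being dropped.
msg_template = HumanMessagePromptTemplate.from_template(
    "Answer in {language}: {question}",
    partial_variables={"language": "English"},
)
print(msg_template.input_variables)  # expected: ['question'] only, not ['language', 'question']
```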

Co-authored-by: Josep Pon Farreny <josep.pon-farreny@siemens.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
Erick Friis c5ae9f832d
INFRA: Lint for imports (#13632)
- Adds pydantic/import linting to core
- Adds a check for `langchain_experimental` imports to langchain
10 months ago
Erick Friis 131db4ba68
BUGFIX: anthropic models on bedrock (#13629)
Introduced in #13403
10 months ago
David Ruan 04bddbaba4
BUGFIX: Update bedrock.py to fix provider bug (#13646)
Provider check was incorrectly failing for anything other than "meta"
10 months ago
Bagatur dc53523837
IMPROVEMENT: bump core dep 0.0.3 (#13690) 10 months ago
Bagatur a208abe6b7
add callback import test (#13689) 10 months ago
Bagatur 083afba697
BUG: Add core utils imports (#13688) 10 months ago
Bagatur c61e30632e
BUG: more core fixes (#13665)
Fix some circular deps:
- move PromptValue into the top-level module because both PromptTemplates and
OutputParsers import it
- move tracer context vars to `tracers.context` and import them in
functions in `callbacks.manager`
- add core import tests
10 months ago
William FH 59df16ab92
Update name (#13676) 10 months ago
Erick Friis bfb980b968
CLI 0.0.19 (#13677) 10 months ago
jakerachleff 249c796785
update langserve to v0.0.30 (#13673)
Upgrade langserve template version to 0.0.30 to include new improvements
10 months ago
jakerachleff c6937a2eb4
fix templates dockerfile (#13672)
- **Description:** We need to update the Dockerfile for templates to
also copy your README.md. This is because poetry requires that a readme
exists if it is specified in the pyproject.toml
10 months ago
Bagatur 11614700a4
bump 0.0.339rc0 (#13664) 10 months ago
Bagatur d32e511826
REFACTOR: Refactor langchain_core (#13627)
Changes:
- remove langchain_core/schema since no clear distinction b/n schema and
non-schema modules
- make every module that doesn't end in -y plural
- where easy have 1-2 classes per file
- no more than one level of nesting in directories
- only import from top level core modules in langchain
10 months ago
William FH 17c6551c18
Add error rate (#13568)
To the in-memory outputs. Separate it out from the outputs so it's
present in the dataframe.describe() results
10 months ago
Nuno Campos 8329f81072
Use pytest asyncio auto mode (#13643)
10 months ago
Bagatur 99b4f46cbe
REFACTOR: Add core as dep (#13623) 10 months ago
Harrison Chase d82cbf5e76
Separate out langchain_core package (#13577)
Co-authored-by: Nuno Campos <nuno@boringbits.io>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
10 months ago
Bagatur e620347a83
RELEASE: bump 339 (#13613) 10 months ago
Ofer Mendelevitch 52e23e50b1
BUG: Fix search_kwargs in Vectara retriever (#13299)
- **Description:** fix a bug that prevented as_retriever() in Vectara
from using the desired input arguments (see the sketch below)
  - **Issue:** as_retriever did not pass the arguments properly
  - **Tag maintainer:** @baskaryan
  - **Twitter handle:** @ofermend
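A hedged usage sketch, assuming `vectara` is an existing Vectara vector store instance and that `k`/`lambda_val` are among the supported search arguments:

```python
# search_kwargs passed here should now reach the underlying Vectara query.
retriever = vectara.as_retriever(search_kwargs={"k": 5, "lambda_val": 0.025})
docs = retriever.get_relevant_documents("What did the author say about LangChain?")
```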
10 months ago
Holt Skinner 1c08dbfb33
IMPROVEMENT: Reduce post-processing time for `DocAIParser` (#13210)
- Remove `WrappedDocument` introduced in
https://github.com/langchain-ai/langchain/pull/11413
- https://github.com/googleapis/python-documentai-toolbox/issues/198 in
Document AI Toolbox to improve initialization time for `WrappedDocument`
object.

@lkuligin

@baskaryan

@hwchase17
10 months ago
Leonid Kuligin f3fcdea574
fixed an UnboundLocalError when no documents are found (#12995)
  - **Description:** fixed a bug
  - **Issue:** #12780
10 months ago
Stijn Tratsaert b6f70d776b
VertexAI LLM count_tokens method requires list of prompts (#13451)
I encountered this during summarization with VertexAI. I was receiving
an INVALID_ARGUMENT error, as it was trying to send a list of about
17000 single characters.

The [count_tokens
method](https://github.com/googleapis/python-aiplatform/blob/main/vertexai/language_models/_language_models.py#L658)
made available by Google takes in a list of prompts. It does not fail
for small texts, but it does for longer documents because the argument
list will exceed Google's allowed limit. Enforcing the list type
makes it work successfully.

This change casts the single input text into a list containing just that
text, so that the input format is always correct.
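A minimal illustrative sketch of the casting described above (hypothetical helper, not the actual implementation):

```python
from typing import List, Union

def _as_prompt_list(text: Union[str, List[str]]) -> List[str]:
    # The underlying Vertex AI count_tokens call expects a list of prompts,
    # so a bare string is wrapped into a single-element list.
    return [text] if isinstance(text, str) else text

assert _as_prompt_list("a very long document") == ["a very long document"]
```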

[Twitter](https://www.x.com/stijn_tratsaert)
10 months ago
Wang Wei fe7b40cb2a
feat: add ERNIE-Bot-4 Function Calling (#13320)
- **Description:** ERNIE-Bot-Chat-4 Large Language Model adds the
ability of `Function Calling` by passing parameters through the
`functions` parameter in the request. To simplify function calling for
ERNIE-Bot-Chat-4, the `create_ernie_fn_chain()` function has been added.
The definition and usage of the `create_ernie_fn_chain()` function is
similar to that of the `create_openai_fn_chain()` function.

An example follows:

```
import json

from langchain.chains.ernie_functions import (
    create_ernie_fn_chain,
)
from langchain.chat_models import ErnieBotChat
from langchain.prompts import ChatPromptTemplate

def get_current_news(location: str) -> str:
    """Get the current news based on the location.

    Args:
        location (str): The location to query.
    
    Returns:
        str: Current news based on the location.
    """

    news_info = {
        "location": location,
        "news": [
            "I have a Book.",
            "It's a nice day, today."
        ]
    }

    return json.dumps(news_info)

def get_current_weather(location: str, unit: str="celsius") -> str:
    """Get the current weather in a given location

    Args:
        location (str): location of the weather.
        unit (str): unit of the temperature.
    
    Returns:
        str: weather in the given location.
    """

    weather_info = {
        "location": location,
        "temperature": "27",
        "unit": unit,
        "forecast": ["sunny", "windy"],
    }
    return json.dumps(weather_info)

llm = ErnieBotChat(model_name="ERNIE-Bot-4")
prompt = ChatPromptTemplate.from_messages(
    [
        ("human", "{query}"),
    ]
)

chain = create_ernie_fn_chain([get_current_weather, get_current_news], llm, prompt, verbose=True)
res = chain.run("北京今天的新闻是什么?")
print(res)
```

The running results of the above program are shown below:
```
> Entering new LLMChain chain...
Prompt after formatting:
Human: 北京今天的新闻是什么?



> Finished chain.
{'name': 'get_current_news', 'thoughts': '用户想要知道北京今天的新闻。我可以使用get_current_news工具来获取这些信息。', 'arguments': {'location': '北京'}}
```
10 months ago
Adilkhan Sarsen 10418ab0c1
DeepLake Backwards compatibility fix (#13388)
- **Description:** during search with DeepLake, some people were facing
backwards-compatibility issues; this PR fixes that by making search
accessible for older datasets

---------

Co-authored-by: adolkhan <adilkhan.sarsen@alumni.nu.edu.kz>
10 months ago
Tyler Hutcherson 190952fe76
IMPROVEMENT: Minor redis improvements (#13381)
- **Description:**
- Fixes a `key_prefix` bug where passing it in on
`Redis.from_existing(...)` did not work properly. Updates doc strings
accordingly.
- Updates Redis filter classes logic with best practices on typing,
string formatting, and handling "empty" filters.
- Fixes a bug that would prevent multiple tag filters from being applied
together in some scenarios.
- Added a whole new filter unit testing module. Also updated code
formatting for a number of modules that were failing the `make`
commands.
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Tag maintainer:** @baskaryan 
  - **Twitter handle:** @tchutch94
10 months ago
Sergey Kozlov df03267edf
Fix tool arguments formatting in StructuredChatAgent (#10480)
In the `FORMAT_INSTRUCTIONS` template, 4 curly braces (escaping) are
used to get single curly brace after formatting:

```
"{{{{ ... }}}}" -> format_instructions.format() -> "{{ ... }}" -> template.format() -> "{ ... }".
```

Tool's `args_schema` string contains single braces `{ ... }`, and is
also transformed to `{{{{ ... }}}}` form. But this is not really correct
since there is only one `format()` call:

```
"{{{{ ... }}}}" -> template.format() -> "{{ ... }}".
```

As a result we get double curly braces in the prompt:
````
Respond to the human as helpfully and accurately as possible. You have access to the following tools:

foo: Test tool FOO, args: {{'tool_input': {{'type': 'string'}}}}    # <--- !!!
...
Provide only ONE action per $JSON_BLOB, as shown:

```
{
  "action": $TOOL_NAME,
  "action_input": $INPUT
}
```
````

This PR fixes curly braces escaping in the `args_schema` to have single
braces in the final prompt:
````
Respond to the human as helpfully and accurately as possible. You have access to the following tools:

foo: Test tool FOO, args: {'tool_input': {'type': 'string'}}    # <--- !!!
...
Provide only ONE action per $JSON_BLOB, as shown:

```
{
  "action": $TOOL_NAME,
  "action_input": $INPUT
}
```
````

---------

Co-authored-by: Sergey Kozlov <sergey.kozlov@ludditelabs.io>
10 months ago
Wouter Durnez ef7802b325
Add llama2-13b-chat-v1 support to `chat_models.BedrockChat` (#13403)
Hi 👋 We are working with Llama2 on Bedrock, and would like to add it to
Langchain. We saw a [pull
request](https://github.com/langchain-ai/langchain/pull/13322) to add it
to the `llm.Bedrock` class, but since it concerns a chat model, we would
like to add it to `BedrockChat` as well.

- **Description:** Add support for Llama2 to `BedrockChat` in
`chat_models`
- **Issue:**
[#13316](https://github.com/langchain-ai/langchain/issues/13316)
  - **Dependencies:** None
  - **Tag maintainer:** /
  - **Twitter handle:** `@SimonBockaert @WouterDurnez`

---------

Co-authored-by: wouter.durnez <wouter.durnez@showpad.com>
Co-authored-by: Simon Bockaert <simon.bockaert@showpad.com>
10 months ago
jwbeck97 a93616e972
FEAT: Add azure cognitive health tool (#13448)
- **Description:** This change adds an agent to the Azure Cognitive
Services toolkit for identifying healthcare entities
  - **Dependencies:** azure-ai-textanalytics (Optional)

---------

Co-authored-by: James Beck <James.Beck@sa.gov.au>
Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
Massimiliano Pronesti 6bf9b2cb51
BUG: Limit Azure OpenAI embeddings chunk size (#13425)
Hi! 
This short PR aims at:
* Fixing `OpenAIEmbeddings`' check on `chunk_size` when used with Azure
OpenAI (thus with openai < 1.0). Azure OpenAI embeddings support at most
16 chunks per batch; I believe we are supposed to take the min between
the passed value/default value and 16, not the max, which, I suppose,
was introduced by accident while refactoring the previous version of
this check from this other PR of mine: #10707
* Porting this fix to the newest class (`AzureOpenAIEmbeddings`) for
openai >= 1.0

This fixes #13539 (closed but the issue persists).  

@baskaryan @hwchase17
10 months ago
Zeyang Lin e53f59f01a
DOCS: doc-string - langchain.vectorstores.dashvector.DashVector (#13502)
- **Description:** There are several mistakes in the sample code in the
doc-string of `DashVector` class, and this pull request aims to correct
them.
The corrected code has been tested against the latest versions at the
time of this pull request: `langchain==0.0.336` and
`dashvector==1.0.6`.
- **Issue:** No issue is created for this.
- **Dependencies:** No dependency is required for this change,
- **Twitter handle:** `zeyanglin`

10 months ago
John Mai 16f7912e1b
BUG: fix hunyuan appid type (#13496)
- **Description:** fix hunyuan appid type
- **Issue:**
https://github.com/langchain-ai/langchain/pull/12022#issuecomment-1815627855
10 months ago
Nicolò Boschi 8362bd729b
AstraDB: use includeSimilarity option instead of $similarity (#13512)
- **Description:** AstraDB is going to deprecate the `$similarity`
projection property in favor of the `includeSimilarity` option flag. I
moved all the queries to the new format.
- **Tag maintainer:** @hemidactylus 
- **Twitter handle:** nicoloboschi
10 months ago
shumpei 7100d586ef
Introduce search_kwargs for Custom Parameters in BingSearchAPIWrapper (#13525)
Added a `search_kwargs` field to BingSearchAPIWrapper in
`bing_search.py`, enabling users to include extra keyword arguments in
Bing search queries. This adds more customization to searches, such as
specifying language preferences. The `search_kwargs` merge seamlessly
with the standard parameters in the `_bing_search_results` method.
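A hedged usage sketch (assumes the Bing subscription key and search URL are already set via environment variables; `mkt` is a standard Bing Web Search parameter used purely as an illustration):

```python
from langchain.utilities import BingSearchAPIWrapper

# Extra Bing parameters ride along with every query via search_kwargs.
search = BingSearchAPIWrapper(search_kwargs={"mkt": "ja-JP"})
results = search.results("LangChain", num_results=3)
```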

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
10 months ago
Nicolò Boschi ad0c3b9479
Fix Astra integration tests (#13520)
- **Description:** Fix the failing Astra integration tests. `delete`
always returns True, since the deletion is considered successful if no
errors are thrown. I aligned the test to verify this behaviour
  - **Tag maintainer:** @hemidactylus 
  - **Twitter handle:** nicoloboschi

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
umair mehmood 69d39e2173
fix: VLLMOpenAI -- create() got an unexpected keyword argument 'api_key' (#13517)
The issue was occurring because of the `openai` update in Completions:
it no longer accepts the `api_key` and `api_base` args.

The fix checks the openai version and, if it is v1, removes these keys
from the args before passing them to the completions `create(...)` call
from `VLLMOpenAI`

Fixed: #13507 

@eyu
@efriis 
@hwchase17

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
10 months ago
Manuel Alemán Cueto 6bc08266e0
Fix for oracle schema parsing stated on the issue #7928 (#13545)
- **Description:** In this pull request, we address an issue related to
assigning a schema to the SQLDatabase class when utilizing an Oracle
database. The current implementation encounters a bug where, upon
attempting to execute a query, the alter session parse is not
appropriately defined for Oracle, leading to an error,
  - **Issue:** #7928,
  - **Dependencies:** No dependencies,
  - **Tag maintainer:** @baskaryan,

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
Andrew Teeter 325bdac673
feat: load all namespaces (#13549)
- **Description:** This change allows for the `MWDumpLoader` to load all
namespaces including custom by default instead of only loading the
[default
namespaces](https://www.mediawiki.org/wiki/Help:Namespaces#Localisation).
  - **Tag maintainer:** @hwchase17
10 months ago
Taranjeet Singh 47451764a7
Add embedchain retriever (#13553)
**Description:**

This commit adds embedchain retriever along with tests and docs.
Embedchain is a RAG framework to create data pipelines.

**Twitter handle:**
- [Taranjeet's twitter](https://twitter.com/taranjeetio) and
[Embedchain's twitter](https://twitter.com/embedchain)

**Reviewer**
@hwchase17

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
rafly lesmana 420a17542d
fix: Make YoutubeLoader support on demand language translation (#13583)
**Description:**
Enhance the functionality of YoutubeLoader to enable the translation of
available transcripts by refining the existing logic.
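A rough usage sketch, assuming the loader's `language` and `translation` parameters (parameter names are an assumption based on the existing YoutubeLoader API; the video URL is a placeholder):

```python
from langchain.document_loaders import YoutubeLoader

# Load the English transcript and translate it on demand to Spanish.
loader = YoutubeLoader.from_youtube_url(
    "https://www.youtube.com/watch?v=VIDEO_ID",  # placeholder URL
    language=["en"],
    translation="es",
)
docs = loader.load()
```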

**Issue:**
Encountering a problem with YoutubeLoader (#13523) where the translation
feature is not functioning as expected.

Tag maintainers/contributors who might be interested:
@eyurtsev

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
Bagatur 78a1f4b264
bump 338, exp 42 (#13564) 10 months ago
Harrison Chase f4c0e3cc15
move streaming stdout (#13559) 10 months ago
Leonid Ganeline 43dad6cb91
BUG fixed `openai_assistant` namespace (#13543)
BUG: langchain.agents.openai_assistant has the reference
`from langchain_experimental.openai_assistant.base import
OpenAIAssistantRunnable`
but it should be
`from langchain.agents.openai_assistant.base import
OpenAIAssistantRunnable`

This was preventing the API Reference docs from building
10 months ago
Bassem Yacoube ff382b7b1b
IMPROVEMENT Adds support for new OctoAI endpoints (#13521)
small fix to add support for new OctoAI LLM endpoints
10 months ago
William FH cac849ae86
Use random seed (#13544)
For default eval llm
10 months ago
Martin Krasser 79ed66f870
EXPERIMENTAL Generic LLM wrapper to support chat model interface with configurable chat prompt format (#8295)
## Update 2023-09-08

This PR now supports further models in addition to Llama-2 chat models.
See [this comment](#issuecomment-1668988543) for further details. The
title of this PR has been updated accordingly.

## Original PR description

This PR adds a generic `Llama2Chat` model, a wrapper for LLMs able to
serve Llama-2 chat models (like `LlamaCPP`,
`HuggingFaceTextGenInference`, ...). It implements `BaseChatModel`,
converts a list of chat messages into the [required Llama-2 chat prompt
format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2) and
forwards the formatted prompt as `str` to the wrapped `LLM`. Usage
example:

```python
# uses a locally hosted Llama2 chat model
llm = HuggingFaceTextGenInference(
    inference_server_url="http://127.0.0.1:8080/",
    max_new_tokens=512,
    top_k=50,
    temperature=0.1,
    repetition_penalty=1.03,
)

# Wrap llm to support Llama2 chat prompt format.
# Resulting model is a chat model
model = Llama2Chat(llm=llm)

messages = [
    SystemMessage(content="You are a helpful assistant."),
    MessagesPlaceholder(variable_name="chat_history"),
    HumanMessagePromptTemplate.from_template("{text}"),
]

prompt = ChatPromptTemplate.from_messages(messages)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
chain = LLMChain(llm=model, prompt=prompt, memory=memory)

# use chat model in a conversation
# ...
```

Also part of this PR are tests and a demo notebook.

- Tag maintainer: @hwchase17
- Twitter handle: `@mrt1nz`

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
10 months ago
William FH c56faa6ef1
Add execution time (#13542)
And warn instead of raising an error, since the chain API is too
inconsistent.
10 months ago
pedro-inf-custodio 0fb5f857f9
IMPROVEMENT WebResearchRetriever error handling in urls with connection error (#13401)
- **Description:** Added a method `fetch_valid_documents` to
`WebResearchRetriever` class that will test the connection for every url
in `new_urls` and remove those that raise a `ConnectionError`.
- **Issue:** [Previous
PR](https://github.com/langchain-ai/langchain/pull/13353),
  - **Dependencies:** None,
  - **Tag maintainer:** @efriis 

10 months ago
Piyush Jain d2335d0114
IMPROVEMENT Neptune graph updates (#13491)
## Description
This PR adds an option to allow unsigned requests to the Neptune
database when using the `NeptuneGraph` class.

```python
graph = NeptuneGraph(
    host='<my-cluster>',
    port=8182,
    sign=False
)
```

Also, added is an option in the `NeptuneOpenCypherQAChain` to provide
additional domain instructions to the graph query generation prompt.
This will be injected in the prompt as-is, so you should include any
provider specific tags, for example `<instructions>` or `<INSTR>`.

```python
chain = NeptuneOpenCypherQAChain.from_llm(
    llm=llm,
    graph=graph,
    extra_instructions="""
    Follow these instructions to build the query:
    1. Countries contain airports, not the other way around
    2. Use the airport code for identifying airports
    """
)
```
10 months ago
William FH 5a28dc3210
Override Keys Option (#13537)
Should be able to override the global key if you want to evaluate
different outputs in a single run
10 months ago
Bagatur e584b28c54
bump 337 (#13534) 10 months ago
Bagatur 2e2114d2d0
FEATURE: Runnable with message history (#13418)
Add RunnableWithMessageHistory class that can wrap certain runnables and manages chat history for them.
10 months ago
Bagatur 0fc3af8932
IMPROVEMENT: update assistants output and doc (#13480) 10 months ago
Hugues Chocart 35e04f204b
[LLMonitorCallbackHandler] Various improvements (#13151)
Small improvements for the llmonitor callback handler, like better
support for non-openai models.


---------

Co-authored-by: vincelwt <vince@lyser.io>
10 months ago
Noah Stapp c1b041c188
Add Wrapping Library Metadata to MongoDB vector store (#13084)
**Description**
MongoDB drivers are used in various flavors and languages. Exercising
due diligence in identifying the "origin" of the library calls helps us
understand how our Atlas servers get accessed.
10 months ago
Guy Korland 7f8fd70ac4
Add optional arguments to FalkorDBGraph constructor (#13459)
**Description:** Add optional arguments to FalkorDBGraph constructor
**Tag maintainer:** baskaryan 
**Twitter handle:** @g_korland
10 months ago
chris stucchio d7f014cd89
Bug: OpenAIFunctionsAgentOutputParser doesn't handle functions with no args (#13467)
**Description/Issue:** 
When OpenAI calls a function with no args, the args are `""` rather than
`"{}"`. Then `json.loads("")` blows up. This PR handles it correctly.

**Dependencies:** None
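A minimal sketch of the guard described above (illustrative; the real parser does more than this):

```python
import json

# OpenAI may return "" instead of "{}" for a zero-argument function call.
raw_args = ""
parsed = json.loads(raw_args or "{}")  # yields {} instead of raising json.JSONDecodeError
```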
10 months ago
Yujie Qian 41a433fa33
IMPROVEMENT: add input_type to VoyageEmbeddings (#13488)
- **Description:** add input_type to VoyageEmbeddings
10 months ago
David Duong ea6e017b85
Add serialisation arguments to Bedrock and ChatBedrock (#13465) 10 months ago
Erick Friis 427331d621
IMPROVEMENT Lock pydantic v1 in app template, cli 0.0.18 (#13485) 10 months ago
Erick Friis 75363f048f
BUG Fix app_name in cli app new (#13482) 10 months ago
ifduyue 324ab382ad
Use List instead of list (#13443)
Unify List usages in libs/langchain/langchain/text_splitter.py: only one
place used `list`, all other occurrences are `List`
10 months ago
Stefano Lottini b029d9f4e6
Astra DB: minor improvements to docstrings and demo notebook (#13449)
This PR brings a few minor improvements to the docs, namely class/method
docstrings and the demo notebook.

- A note on how to control concurrency levels to tune performance in
bulk inserts, both in the class docstring and the demo notebook;
- Slightly increased concurrency defaults after careful experimentation
(still on the conservative side even for clients running on
less-than-typical network/hardware specs)
- renamed the DB token variable to the standardized
`ASTRA_DB_APPLICATION_TOKEN` name (used elsewhere, e.g. in the Astra DB
docs)
- added a note and a reference (add_text docstring, demo notebook) on
allowed metadata field names.

Thank you!
10 months ago
Eugene Yurtsev 1e43fd6afe
Add ahandle_event to _all_ (#13469)
Add ahandle_event for backwards compatibility as it is used by langserve
10 months ago
Harrison Chase f90249305a
callback refactor (#13372)
Co-authored-by: Nuno Campos <nuno@boringbits.io>
10 months ago
Bagatur a9b2c943e6
bump 336, exp 44 (#13420) 10 months ago
Bagatur 1372296dc8
FIX: Infer runnable agent single or multi action (#13412) 10 months ago
Eugene Yurtsev accadccf8e
Use secretstr for api keys for javelin-ai-gateway (#13417)
- Make javelin_ai_gateway_api_key a SecretStr

---------

Co-authored-by: Hiroshi Tashiro <hiroshitash@gmail.com>
10 months ago
William FH ba501b27a0
Fix Runnable Lambda Afunc Repr (#13413)
Otherwise, you get an error when using async functions.


h/t to Chris Ruppelt
10 months ago
Sumukh Sridhara 1726d5dcdd
Merge pull request #13232
* PGVector needs to close its connection if it's garbage collected
10 months ago
Nuno Campos 85a77d2c27
IMPROVEMENT Passthrough kwargs in runnable lambda (#13405)
10 months ago
Bagatur 76c317ed78
DOCS: update rag use case (#13319) 10 months ago
Clay Elmore 8823e3831f
FEAT Bedrock cohere embedding support (#13366)
- **Description:** adding cohere embedding support to bedrock embedding
class
  - **Issue:** N/A
  - **Dependencies:** None
  - **Tag maintainer:** @3coins 
  - **Twitter handle:** celmore25

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
10 months ago
Nuno Campos d5aeff706a
Make it easier to subclass RunnableEach (#13346)
10 months ago
竹内謙太 3b5e8bacfa
FEAT Add some properties to NotionDBLoader (#13358)

fix #13356

Add supports following properties for metadata to NotionDBLoader.

- `checkbox`
- `email`
- `number`
- `select`

There are no relevant tests for this code to be updated.
10 months ago
Fielding Johnston 37eb44c591
BUG Add limit_to_domains to APIChain based tools (#13367)
- **Description:** Adds `limit_to_domains` param to the APIChain based
tools (open_meteo, TMDB, podcast_docs, and news_api); a short sketch of
the underlying parameter follows below
- **Issue:** I didn't open an issue, but after upgrading to 0.0.328
using these tools would throw an error.
  - **Dependencies:** N/A
  - **Tag maintainer:** @baskaryan 
  
  
**Note**: I included the trailing / simply because the docs here did
fc886cc303/docs/docs/use_cases/apis.ipynb (L246)
, but I checked the code and it is using `urlparse`. So I followed the
docs, since it comes down to style.
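A hedged sketch of the underlying `APIChain` parameter that these tools now pass through (assumes `llm` is an already-constructed LLM instance; the chain-level parameter itself predates this PR, which wires it into the tools):

```python
from langchain.chains.api.base import APIChain
from langchain.chains.api import open_meteo_docs

# Restrict the chain to the Open-Meteo domain so the LLM cannot be steered to arbitrary URLs.
chain = APIChain.from_llm_and_api_docs(
    llm,  # assumed: an existing LLM instance
    open_meteo_docs.OPEN_METEO_DOCS,
    limit_to_domains=["https://api.open-meteo.com/"],
)
```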
10 months ago
Bagatur 38180ad25f
bump openai support (#13262) 10 months ago
Erick Friis 7c3066f9ec
more cli interactivity, bugfix (#13360) 10 months ago
Predrag Gruevski d63d4994c0
Bump all libraries to the latest `ruff` version. (#13350)
This version of `ruff` is the one we'll be using to lint the docs and
cookbooks (#12677), so I'm making it used everywhere else too.
10 months ago
Massimiliano Pronesti 344cab0739
IMPROVEMENT: support Openai API v1 for Azure OpenAI completions (#13231)
Hi,
this PR adds support for OpenAI API v1 for Azure OpenAI completion API.
@baskaryan @hwchase17

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
10 months ago
dependabot[bot] fc886cc303
Bump pyarrow from 13.0.0 to 14.0.1 in /libs/langchain (#13363)
Bumps [pyarrow](https://github.com/apache/arrow) from 13.0.0 to 14.0.1.
<details>
<summary>Commits</summary>
<ul>
<li><a
href="ba53748361"><code>ba53748</code></a>
MINOR: [Release] Update versions for 14.0.1</li>
<li><a
href="529f3768fa"><code>529f376</code></a>
MINOR: [Release] Update .deb/.rpm changelogs for 14.0.1</li>
<li><a
href="b84bbcac64"><code>b84bbca</code></a>
MINOR: [Release] Update CHANGELOG.md for 14.0.1</li>
<li><a
href="f141709763"><code>f141709</code></a>
<a
href="https://redirect.github.com/apache/arrow/issues/38607">GH-38607</a>:
[Python] Disable PyExtensionType autoload (<a
href="https://redirect.github.com/apache/arrow/issues/38608">#38608</a>)</li>
<li><a
href="5a37e74198"><code>5a37e74</code></a>
<a
href="https://redirect.github.com/apache/arrow/issues/38431">GH-38431</a>:
[Python][CI] Update fs.type_name checks for s3fs tests (<a
href="https://redirect.github.com/apache/arrow/issues/38455">#38455</a>)</li>
<li><a
href="2dcee3f82c"><code>2dcee3f</code></a>
MINOR: [Release] Update versions for 14.0.0</li>
<li><a
href="297428cbf2"><code>297428c</code></a>
MINOR: [Release] Update .deb/.rpm changelogs for 14.0.0</li>
<li><a
href="3e9734f883"><code>3e9734f</code></a>
MINOR: [Release] Update CHANGELOG.md for 14.0.0</li>
<li><a
href="9f90995c8c"><code>9f90995</code></a>
<a
href="https://redirect.github.com/apache/arrow/issues/38332">GH-38332</a>:
[CI][Release] Resolve symlinks in RAT lint (<a
href="https://redirect.github.com/apache/arrow/issues/38337">#38337</a>)</li>
<li><a
href="bd61239a32"><code>bd61239</code></a>
<a
href="https://redirect.github.com/apache/arrow/issues/35531">GH-35531</a>:
[Python] C Data Interface PyCapsule Protocol (<a
href="https://redirect.github.com/apache/arrow/issues/37797">#37797</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/apache/arrow/compare/go/v13.0.0...go/v14.0.1">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=pyarrow&package-manager=pip&previous-version=13.0.0&new-version=14.0.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)


---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Predrag Gruevski <2348618+obi1kenobi@users.noreply.github.com>
10 months ago
Erick Friis c0e6045c0b
cli 0.0.17 (#13359) 10 months ago
Erick Friis 927824b7cb
CLI interactivity (#13148)
Will implement more later
10 months ago
billytrend-cohere 2f6fe6ddf3
Fix latest message index (#13355)
There is a bug which caused the earliest message, rather than the latest
message, to be sent
10 months ago
Harrison Chase be854225c7
add more reasonable arxiv retriever (#13327) 10 months ago
Krish Dholakia 5a920e14c0
fix litellm openai imports (#13307) 10 months ago
Bagatur 1c67db4c18
Move OAI assistants to langchain and add callbacks (#13236) 10 months ago
Erick Friis 280ecfd8eb
IMPROVEMENT redirect root to docs in langserve app template (#13303) 10 months ago
mertkayhan 9b4974871d
IMPROVEMENT Increase flexibility of ElasticVectorSearch (#6863)
Hey @rlancemartin, @eyurtsev ,

I did some minimal changes to the `ElasticVectorSearch` client so that
it plays better with existing ES indices.

Main changes are as follows:

1. You can pass the dense vector field name into `_default_script_query`
2. You can pass a custom script query implementation and the respective
parameters to `similarity_search_with_score`
3. You can pass functions for building page content and metadata for the
resulting `Document`

10 months ago
Erick Friis 50a5c919f0
IMPROVEMENT self-query template (#13305)
- [ ]
https://github.com/langchain-ai/langchain/pull/12694#discussion_r1391334719
-> keep date
- [x]
https://github.com/langchain-ai/langchain/pull/12694#discussion_r1391336586
10 months ago
Yasin b46f88d364
IMPROVEMENT add license file to subproject (#8403)

hi!
This is pretty straight-forward: The sdist package does not contain the
license file (which is needed by e.g. conda) because the package is
built from the subdir and can't see the license.
I _copied_ the license but since I'm unfamiliar with the project's
direction, I'm not sure that's correct.
thanks!

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
10 months ago
Rui Ramos ff19a62afc
Fix Pinecone cosine relevance score (#8920)
Fixes: #8207

Description:
Pinecone returns scores (not distances) with cosine similarity. The
values according to the docs are [-1, 1], although I could never
reproduce negative values.

This PR ensures that the score returned from Pinecone is preserved,
rather than inverted, so the most relevant documents can be filtered (eg
when using similarity thresholds)

I'll leave this as a draft PR as I couldn't run the tests (my pinecone
account might not be enough - some errors were being thrown around
namespaces) so hopefully someone who _can_ will pick this up.

Maintainers:
@rlancemartin, @eyurtsev

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
10 months ago
Bagatur 2e42ed5de6
Self-query template (#12694)
Co-authored-by: Erick Friis <erick@langchain.dev>
10 months ago
Konstantin Spieß 1e43025bf5
Fix serialization issue in Matching Engine Vector Store (#13266)
- **Description:** Fixed a serialization issue in the add_texts method
of the Matching Engine Vector Store caused by a typo, leading to an
attempt to serialize the json module itself.
  - **Issue:** #12154 
  - **Dependencies:** ./.
  - **Tag maintainer:**
11 months ago
William FH 9169d77cf6
Update error message in evaluation runner (#13296) 11 months ago
takatost f22f273f93
FIX: 'from_texts' method in Weaviate with non-existent kwargs param (#11604)
Due to the possibility of external inputs including UUIDs, there may be
additional values in **kwargs, while Weaviate's `__init__` method does
not support passing extra **kwarg parameters.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
11 months ago
Frank995 971d2b2e34
Add missing filter to max_marginal_relevance_search inner call to max_marginal_relevance_search_by_vector (#13260)
When calling max_marginal_relevance_search from PGVector the filter
param is not carried over to max_marginal_relevance_search_by_vector

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
chevalmuscle 3ad78e48e2
Use endpoint_url if provided with boto3 session for dynamodb (#11622)
- **Description:** Uses `endpoint_url` if provided with a boto3 session.
When running dynamodb locally, credentials are required even if invalid.
With this change, it will be possible to pass a boto3 session with
credentials and specify an endpoint_url
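A rough sketch of the use case described above, for a locally running DynamoDB (constructor parameter names such as `boto3_session` are assumptions based on this PR's description):

```python
import boto3
from langchain.memory.chat_message_histories import DynamoDBChatMessageHistory

# A session with dummy credentials is enough for DynamoDB Local, but credentials must be present.
session = boto3.Session(
    aws_access_key_id="dummy",
    aws_secret_access_key="dummy",
    region_name="us-east-1",
)
history = DynamoDBChatMessageHistory(
    table_name="SessionTable",
    session_id="session-1",
    boto3_session=session,                 # assumed parameter name
    endpoint_url="http://localhost:8000",  # points at the local DynamoDB endpoint
)
```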

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
11 months ago
Erick Friis 18acc22f29
Ollama pass kwargs as options instead of top (#13280)
Noticed params are really in `options` instead while reviewing #12895
11 months ago
刘 方瑞 46af56dc4f
Add MyScaleWithoutJSON which allows user to wrap columns into Document's Metadata (#13164)
- **Description:** Add MyScaleWithoutJSON which allows user to wrap
columns into Document's Metadata
  - **Tag maintainer:** @baskaryan
11 months ago
Michael Landis 2aa13f1e10
chore: bump momento dependency version and refactor search hit usage (#13111)
**Description**

Bumps the Momento dependency to the latest version and refactors the
usage of `SearchHit` in the Momento Vector Index (MVI) vector store
integration. This change is a one liner where we use the preferred
attribute `score` to read the query-document similarity instead of
`distance`. The latest versions of Momento clients will use this
attribute going forward.

**Dependencies**

Updated the Momento dependency to latest version.

**Tests**

💚 I re-ran the existing MVI integration tests
(`tests/integration_tests/vectorstores/test_momento_vector_index.py`)
and they pass.

**Review**
cc @baskaryan @eyurtsev
11 months ago
kYLe cc55d2fcee
Add OpenAI API v1 support for ChatAnyscale and fixed a bug with openai_api_key (#13237)
1. Add OpenAI API v1 support
2. Fixed a bug where `get_secret_value` was called on a str value
(values["openai_api_key"])
11 months ago
Govind.S.B 9024593468
added system prompt and template fields to ollama (#13022)
**Description**
The Ollama API now supports passing the system prompt and template
directly instead of modifying the model file, but the Ollama integration
in LangChain had not been updated for this change. The update just adds
these two parameters (there are 2 more parameters that are pending to be
updated; I was not sure about their utility with respect to LangChain).
Refer:
8713ac23a8
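A hedged sketch, assuming the two new fields are exposed as `system` and `template` (the names mirror the Ollama API; verify against the merged code):

```python
from langchain.llms import Ollama

llm = Ollama(
    model="llama2",
    system="You are a terse assistant.",           # assumed field name: system
    template="{{ .System }} USER: {{ .Prompt }}",  # assumed field name: template (Ollama Go-template syntax)
)
print(llm("Say hello in one word."))
```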

**Issue**: None applicable

**Dependencies**: None changed

**Twitter handle**: https://twitter.com/violetto96

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
11 months ago
langchain-infra f55f67055f
Add dockerfile template (#13240) 11 months ago
Guillem Orellana Trullols 0f31cd8b49
Remove `_get_kwarg_value` function (#13184)
The `_get_kwarg_value` function is unnecessary; one can rely on Python
built-in functionality to do the exact same thing.

- **Description:** Removed `_get_kwarg_value`. Helps with code
readability.
  - **Issue:** the issue # it fixes (if applicable),
  - **Twitter handle:** @Guillem_96
11 months ago
SuperDa Fu e1c020dfe1
dalle add model parameter (#13201)
- **Description:** dalle_image_generator: add a new model parameter (see
the sketch below),
  - **Issue:** N/A,
  - **Dependencies:**
  - **Tag maintainer:** @hwchase17
  - **Twitter handle:**
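A hedged usage sketch, assuming the new parameter is simply called `model` (e.g. to select DALL·E 3); the wrapper class and its `run` method already exist:

```python
from langchain.utilities.dalle_image_generator import DallEAPIWrapper

# Assumed parameter name: model (per this PR's description).
dalle = DallEAPIWrapper(model="dall-e-3")
image_url = dalle.run("a watercolor painting of a lighthouse at dawn")
print(image_url)
```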

---------

Co-authored-by: dafu <xiangbingze@wenru.wang>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Erick Friis <erickfriis@gmail.com>
11 months ago
Dennis de Greef 64e11592bb
Improve CSV reader which can't call .strip() on NoneType (#13079)
Improve the CSV reader, which can't call .strip() on NoneType if there are
fewer cells in a row than in the header

  - **Description:** I have a CSV file as follows:

```
headerA,headerB,headerC
v1A,v1B,v1C,
v2A,v2B
v3A,v3B,v3C
```
In this case, row 2 is missing a value, which results in reading a None
type. The strip() method cannot be called on None, hence the exception. In
this PR I am making the change to only call strip if the value is not
None.

11 months ago
glad4enkonm 339973db47
Update ollama.py (#12895)
Removed a duplicate option.
**Description:** An issue fix; the duplicated HTTP `stop` option was removed.
**Issue:** fixes #12892
**Dependencies:** no
**Tag maintainer:** @eyurtsev

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
11 months ago
Isak Nyberg 8f81703d76
Add new models to openai callback (#13244)
**Description:** Adding the new models to the openai callback function,
info taken from [model
announcement](https://platform.openai.com/docs/models) and
[pricing](https://openai.com/pricing)

A short description for a short PR :)
11 months ago
Bagatur ea6dd3a550
bump 335 (#13261) 11 months ago
William FH a837b03e55
Update langsmith version 0.63 (#13208) 11 months ago
Harrison Chase 7f1d26160d
update tools (#13243) 11 months ago
Nuno Campos 8d6faf5665
Make it easier to subclass runnable binding with custom init args (#13189) 11 months ago
Peter Vandenabeele 7f1964b264
Fix BeautifulSoupTransformer: no more duplicates and correct order of tags + tests (#12596) 11 months ago
Erick Friis 9c7afa8adb
Upgrade cohere embedding model to v3 (#13219)
Just updates API docs, doesn't change default param from 2.0 (could be
breaking change)
11 months ago
Erick Friis 8fdf15c023
Fix Document Loader Unit Test - Docusaurus (#13228) 11 months ago
Lee 72ad448daa
feat: Docusaurus Loader (#9138)
Added a Docusaurus Loader

Issue: #6353

I had to implement this for working with the Ionic documentation, and
wanted to open this up as a draft to get some guidance on building this
out further. I wasn't sure if having it be a light extension of the
SitemapLoader was in the spirit of a proper feature for the library --
but I'm grateful for the opportunities Langchain has given me and I'd
love to build this out properly for the sake of the community.

Any feedback welcome!
11 months ago
Tomaz Bratanic 0dc4ab0be1
Neo4j chat message history (#13008) 11 months ago
fyasla d266b3ea4a
issue #12165 mask API key in chat_models/azureml_endpoint module (#12836)
- **Description:** `AzureMLChatOnlineEndpoint` object from
langchain/chat_models/azureml_endpoint.py safe to print
without having any secrets included in raw format in the string
representation.
  - **Issue:** #12165,
  - **Tag maintainer:** @eyurtsev
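For illustration, the masking pattern used by these SecretStr PRs looks roughly like this (a minimal sketch, not the module's actual code):

```python
from pydantic import SecretStr

# SecretStr prints as '**********', so the key never appears in repr/str output.
api_key = SecretStr("super-secret-value")
print(api_key)                     # **********
print(api_key.get_secret_value())  # super-secret-value (explicit access only)
```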

---------

Co-authored-by: Faysal Bougamale <faysal.bougamale@horiba.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
Anush 52f34de9b7
feat: FastEmbed embedding provider (#13109)
## Description:
This PR intends to add
[Qdrant/FastEmbed](https://qdrant.github.io/fastembed/) as a local
embeddings provider, associated tests and documentation.
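A quick usage sketch (import path is assumed for this integration):

```python
from langchain.embeddings import FastEmbedEmbeddings  # assumed import path

# Runs locally via Qdrant's FastEmbed; no remote API calls required.
embeddings = FastEmbedEmbeddings()
vector = embeddings.embed_query("hello world")
print(len(vector))
```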

**Documentation preview:**
https://langchain-git-fork-anush008-master-langchain.vercel.app/docs/integrations/text_embedding/fastembed

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
11 months ago
Eugene Yurtsev b0e8cbe0b3
Add RunnableSequence documentation (#13094)
Add RunnableSequence documentation
11 months ago
Eugene Yurtsev 869df62736
Document RunnableWithFallbacks (#13088)
Add documentation to RunnableWithFallbacks
11 months ago
Eugene Yurtsev 8313c218da
Add more runnable documentation (#13083)
- Adding documentation to the runnable.
- Documentation is not organized in the best way for the runnable; i.e.,
in
terms of LCEL vs. other standard methods, will follow up with more
edits.
11 months ago
Bagatur 24386e0860
bump 334, exp 40 (#13211) 11 months ago
Lance Martin d2e50b3108
Add Chroma multimodal cookbook (#12952)
Pending:
* https://github.com/chroma-core/chroma/pull/1294
* https://github.com/chroma-core/chroma/pull/1293

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
The1Bill 55912868da
Update toolkit.py to remove single quotes around table names (#12445)
**Description:** Removing the single quote wrapper around the table
names in the SQL agent toolkit.py file as it misleads the LLM into
querying against tables with single quotes around their names.
**Issue:** #7457 
**Dependencies:** None
**Tag maintainer:** @hwchase17 
**Twitter handle:** None
11 months ago
Nuno Campos 362a446999
Changes to root listener (#12174)
- Implement config_specs to include session_id
- Remove Runnable method and update notebook
- Add more details to notebook, eg. show input schema and config schema
before and after adding message history

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
11 months ago
Nuno Campos b2b94424db
Update return type for Runnable.__or__ (#12880)
11 months ago
Harrison Chase 0a2b1c7471
improve duck duck go tool (#13165) 11 months ago
Shinya Maeda 28cc60b347
Fix langchain.llms OpenAI completion doesn't work due to v1 client update (#13099)
This commit fixes the issue that langchain.llms OpenAI completion
stopped working since the V1 openai client update.

- **Description:** This PR fixes the issue [AttributeError: module
'openai' has no attribute
'Completion'](https://github.com/langchain-ai/langchain/issues/12967)
similar to
8e0cb2eb84
and https://github.com/langchain-ai/langchain/pull/12969,
  - **Issue:** https://github.com/langchain-ai/langchain/issues/12967,
  - **Dependencies:** `openai` v1.x.x client,
  - **Tag maintainer:** @baskaryan,
  - **Twitter handle:** @dosuken123 


---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
Bagatur ff43cd6701
OpenAI remove httpx typing (#13154)
Addresses #13124
11 months ago
Bagatur 8b2a82b5ce
Bagatur/docs smith context (#13139) 11 months ago
Bagatur f04cc4b7e1
bump 333 (#13131) 11 months ago
billytrend-cohere b346d4a455
Add message to documents (#12552)
This adds the response message as a document to the rag retriever so
users can choose to use this. Also drops document limit.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
Harrison Chase 5f38770161
Support oai tool call (#13110)
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Nuno Campos <nuno@boringbits.io>
11 months ago
Holt Skinner 0fc8fd12bd
feat: Vertex AI Search - Add Snippet Retrieval for Non-Advanced Website Data Stores (#13020)
https://cloud.google.com/generative-ai-app-builder/docs/snippets#snippets

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
11 months ago
Jacob Lee 76283e9625
Adds embeddings filter option to return scores in state (#12489)
CC @baskaryan @assafelovic
11 months ago
jakerachleff 18601bd4c8
Get project from langchain sdk (#13100)
## Description
We need to centralize the API we use to get the project name for our
tracers. This PR makes it so we always get this from a shared function
in the langsmith sdk.

## Dependencies
Upgraded langsmith from 0.52 to 0.62 to include the new API
`get_tracer_project`
11 months ago
Bagatur 72e12f6bcf
update more azure docs (#13093) 11 months ago
Bagatur 1703f132c6
update azure embedding docs (#13091) 11 months ago
Bagatur 9fdfac22c2
bump 332 (#13089) 11 months ago
Bagatur 1f85ec34d5
bump 331rc3 exp 39 (#13086) 11 months ago
Anton Troynikov 9f077270c8
Don't pass EF to chroma (#13085)
- **Description:** 

Recently Chroma rolled out a breaking change on the way we handle
embedding functions, in order to support multi-modal collections.

This broke the way LangChain's `Chroma` objects get created, because we
were passing the EF down into the Chroma collection:
https://docs.trychroma.com/migration#migration-to-0416---november-7-2023

However, internally, we are never actually using embeddings on the
chroma collection - LangChain's `Chroma` object calls it instead. Thus
we just don't pass an `embedding_function` to Chroma itself, which fixes
the issue.
11 months ago
Erick Friis f15f8e01cf
Azure OpenAI Embeddings (#13039)
Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
David Peterson 37561d8986
Add Proper Import Error (#13042)
- **Description:** The Amazon Textract loader was not raising a proper
import error when its dependency was missing.
- **Issue:** Time wasted trying to figure out what to install...
(langchain docs don't list the dependency either)
  - **Dependencies:** N/A
  - **Tag maintainer:** @sbusso 
  - **Twitter handle:** @h9ste

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
11 months ago
Eugene Yurtsev 06c503f672
Add RunnableRetry Documentation (#13074) 11 months ago
Bagatur 55aeff6777
oai assistant multiple actions (#13068) 11 months ago
Erick Friis a9b70baef9
cli updates, 0.0.16 (#13034)
- confirm flags, serve detection
- 0.0.16
- always gen code
- pip bool
11 months ago
Erick Friis 506f81563f
Update Deps in Experimental (#13029) 11 months ago
Stefano Lottini 4f4b020582
Add "Astra DB" vector store integration (#12966)
# Astra DB Vector store integration

- **Description:** This PR adds a `VectorStore` implementation for
DataStax Astra DB using its HTTP API
  - **Issue:** (no related issue)
- **Dependencies:** A new required dependency is `astrapy` (`>=0.5.3`)
which was added to pyproject.toml, optional, as per guidelines
- **Tag maintainer:** I recently mentioned to @baskaryan this
integration was coming
  - **Twitter handle:** `@rsprrs` if you want to mention me

This PR introduces the `AstraDB` vector store class, extensive
integration test coverage, a reworking of the documentation which
conflates Cassandra and Astra DB on a single "provider" page and a new,
completely reworked vector-store example notebook (common to the
Cassandra store, since parts of the flow are shared by the two APIs). I
also took care in ensuring docs (and redirects therein) are behaving
correctly.
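A usage sketch of the new store (constructor arguments and endpoint/token values below are placeholders taken as assumptions, not from this PR):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import AstraDB  # assumed import path for the new class

vstore = AstraDB(
    embedding=OpenAIEmbeddings(),
    collection_name="my_collection",
    api_endpoint="https://<database-id>-<region>.apps.astra.datastax.com",
    token="AstraCS:...",
)
vstore.add_texts(["LangChain meets Astra DB"])
docs = vstore.similarity_search("Astra DB", k=1)
```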

All style, linting, typechecks and tests pass as far as the `AstraDB`
integration is concerned.

I could build the documentation and check it all right (but ran into
trouble with the `api_docs_build` makefile target which I could not
verify: `Error: Unable to import module
'plan_and_execute.agent_executor' with error: No module named
'langchain_experimental'` was the first of many similar errors)

Thank you for a review!
Stefano

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
11 months ago
Yang, Bo 600caff03c
Add `Memorize` tool (#11722)
- **Description:** Add `Memorize` tool
  - **Tag maintainer:** @hwchase17

This PR added a new tool `Memorize` so that an agent can use it to
fine-tune itself. This tool requires `TrainableLLM` introduced in #11721

DEMO:
6a9003d5db

![image](https://github.com/langchain-ai/langchain/assets/601530/d6f0cb45-54df-4dcf-b143-f8aefb1e76e3)
11 months ago
Bagatur cf481c9418
bump exp 38 (#13016) 11 months ago
Bagatur 57e19989f6
Bagatur/oai assistant (#13010) 11 months ago
Erick Friis 74134dd7e1
cli pyproject updating (#12945)
`langchain app add` and `langchain app remove` will now keep the
dependencies list updated.

---------

Co-authored-by: Nuno Campos <nuno@boringbits.io>
11 months ago
Bagatur 6175dc30aa
bump 331rc2 (#13006) 11 months ago
Erick Friis 0c81cd923e
oai v1 embeddings (#12969)
Initial PR to get OpenAIEmbeddings working with the new sdk

fyi @rlancemartin 

Fixes #12943

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
Bagatur fdbb45d79e
bump 331rc1 (#12965) 11 months ago
Bagatur 3bb8030a6e
fix max_tokens (#12964) 11 months ago
Bagatur a9002a82b8
bump 331rc0 (#12963) 11 months ago
Harrison Chase c27400efeb
Support multimodal messages (#11320)
Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
Bagatur 4f7dff9d66
Record system fingerprint chat openai (#12960) 11 months ago
Bagatur 8e0cb2eb84
ChatOpenAI and AzureChatOpenAI openai>=1 compatible (#12948) 11 months ago
Kacper Łukawski 52d0055a91
Add support of Cohere Embed v3 (#12940)
Cohere released the new embedding API (Embed v3:
https://txt.cohere.com/introducing-embed-v3/) that treats document and
query embeddings differently. This PR updated the `CohereEmbeddings` to
use them appropriately. It also works with the old models.
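A brief sketch, assuming the v3 model name (v3 distinguishes document and query embeddings internally):

```python
from langchain.embeddings import CohereEmbeddings

embeddings = CohereEmbeddings(model="embed-english-v3.0", cohere_api_key="...")
doc_vectors = embeddings.embed_documents(["a short document"])
query_vector = embeddings.embed_query("a question about the document")
```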
11 months ago
Praveen Venkateswaran 8e0dcb37d2
Add SecretStr for Symbl.ai Nebula API (#12896)
Description: This PR masks API key secrets for the Nebula model from
Symbl.ai
Issue: #12165 
Maintainer: @eyurtsev

---------

Co-authored-by: Praveen Venkateswaran <praveen.venkateswaran@ibm.com>
11 months ago
Vinzenz Klass 59d0bd2150
feat: acquire advisory lock before creating extension in pgvector (#12935)
- **Description:** Acquire advisory lock before attempting to create
extension on postgres server, preventing errors in concurrent
executions.
  - **Issue:** #12933
  - **Dependencies:** None
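A sketch of the advisory-lock approach described above (the lock key and connection string are placeholders, not taken from the PR):

```python
import sqlalchemy

# Take a transaction-scoped advisory lock before CREATE EXTENSION so that
# concurrent workers don't race on extension creation.
engine = sqlalchemy.create_engine("postgresql+psycopg2://user:pass@localhost:5432/db")
with engine.begin() as conn:
    conn.execute(sqlalchemy.text("SELECT pg_advisory_xact_lock(123456789)"))
    conn.execute(sqlalchemy.text("CREATE EXTENSION IF NOT EXISTS vector"))
```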

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
11 months ago
Eugene Yurtsev b376854b26
Fix for anyscale chat model api key (#12938)
* ChatAnyscale was missing coercion to SecretStr for anyscale api key
* The model inherits from ChatOpenAI so it should not force the openai
api key to be secret str until openai model has the same changes

https://github.com/langchain-ai/langchain/issues/12841
11 months ago
hmasdev 622bf12c2e
fix regex pattern of structured output parser (#12929)
- **Description:** fix the regex pattern of
[StructuredChatOutputParser](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/agents/structured_chat/output_parser.py#L18)
and add unit tests for the code change.
- **Issue:** #12158 #12922
- **Dependencies:** None
- **Tag maintainer:** 
- **Twitter handle:** @hmdev3
- **NOTE:** This PR conflicts with #7495. After #7495 is merged, I will
update this PR.
11 months ago
wemysschen 8d7144e6a6
fix baiducloud directory loader import file loader (#12924)
**Issue:** 
fix the BaiduCloud BOS directory loader importing its file loader

---------

Co-authored-by: wemysschen <root@icoding-cwx.bcc-szzj.baidu.com>
11 months ago
Kacper Łukawski 621419f71e
Fix normalizing the cosine distance in Qdrant (#12934)
Qdrant was incorrectly calculating the cosine similarity and returning
`0.0` for the best match, instead of `1.0`. Internally Qdrant returns a
cosine score from `-1.0` (worst match) to `1.0` (best match), and the
current formula reflects it.
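A sketch of the rescaling being described, assuming the usual mapping from [-1, 1] to [0, 1] (illustrative, not the exact patch):

```python
def cosine_relevance_score(score: float) -> float:
    # Qdrant cosine score is in [-1, 1]; relevance scores are expected in [0, 1].
    return (score + 1.0) / 2.0

assert cosine_relevance_score(1.0) == 1.0   # best match
assert cosine_relevance_score(-1.0) == 0.0  # worst match
```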
11 months ago
Hech 8fe6bcc662
Fix return metadata when searching for DingoDB (#12937) 11 months ago
Jakub Novák ada3d2cbd1
Add possibility to pass on_artifacts for a specific conversation (#12687)
Adds the possibility to pass on_artifacts for a specific conversation. It can
then be achieved this way:

```python
result = agent.run(
    input=message.text,
    metadata={
        "on_artifact": CALLBACK_FUNCTION
    },
)
```
11 months ago
Bagatur 53f453f01a
bump 331 (#12932) 11 months ago
Erick Friis 5000c7308e
cli template gitignores (#12914)
- ap gitignore
- package
11 months ago
Harrison Chase aba407f774
use keys not items (#12918) 11 months ago
wemysschen e14aa37d59
fix bes vector store search (#12828)
**Issue:** 
fix search body in baidu cloud vectorsearch

---------

Co-authored-by: wemysschen <root@icoding-cwx.bcc-szzj.baidu.com>
11 months ago
Lance Martin ea1ab391d4
Open Clip multimodal embeddings (#12754) 11 months ago
Bagatur ebee616822
bump 330 (#12853) 11 months ago
Erick Friis 6c237716c4
Update readmes with new cli install (#12847)
Old command still works. Just simplifying.

Merge after releasing CLI 0.0.15
11 months ago
Erick Friis 7db49d3842
Confirm sys.path includes current dir for app serve (#12851)
- Make sure sys.path is set properly for langchain app serve
- bump
11 months ago
Erick Friis 1bc35f61cb
CLI 0.0.14, Uvicorn update and no more [serve] (#12845)
Calls uvicorn directly from cli:
Reload works if you define app by import string instead of object.
(was doing subprocess in order to get reloading)

Version bump to 0.0.14

Remove the need for [serve] for simplicity.

Readmes are updated in #12847 to avoid cluttering this PR
11 months ago
William FH 18005c6384
Disable trace_on_chain_group auto-tracing (#12807)
Previously we treated trace_on_chain_group as a command to always start
tracing. This is unintuitive (makes the function do 2 things), and makes
it harder to toggle tracing
11 months ago
Erick Friis 0da75b9ebd
Autopopulate module name in cli init (#12814) 11 months ago
William FH 98aff29fbd
Add Dataset Page to printout (#12816) 11 months ago
Manuel Rech 2e2b9c76d9
Keep also original query - multi_query.py (#12696)
When you use a MultiQuery it might be useful to use the original query
as well as the newly generated ones to maximise the changes to retriever
the correct document. I haven't created an issue, it seems a very small
and easy thing.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
Bagatur 658a3a8607
FEAT: Merge TileDB vecstore (#12811) 11 months ago
Akio Nishimura c04647bb4e
Correct number of elements in config list in `batch()` and `abatch()` of `BaseLLM` (#12713)
- **Description:** Correct number of elements in config list in
`batch()` and `abatch()` of `BaseLLM` in case `max_concurrency` is not
None.
- **Issue:** #12643
- **Twitter handle:** @akionux

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
James Braza 88b506b321
Adds missing `urllib.parse` for IDE warning of `PubMedAPIWrapper` (#12808)
Resolves an IDE (PyCharm 2023.2.3 PE) warning around
`urllib.parse.quote`, also enabling CTRL-click
11 months ago
Bagatur a2bb0dd445 TileDB update import unit tests 11 months ago
Nikos Papailiou 2fdaa1e5fd
Add TileDB vectorstore implementation (#12624)
- **Description:** Add [TileDB](https://tiledb.com) vectorstore
implementation. TileDB offers ANN search capabilities using the
[TileDB-Vector-Search](https://github.com/TileDB-Inc/TileDB-Vector-Search)
module. It provides serverless execution of ANN queries and storage of
vector indexes both on local disk and cloud object stores (i.e. AWS S3).
More details in:
- [Why TileDB as a Vector
Database](https://tiledb.com/blog/why-tiledb-as-a-vector-database)
- [TileDB 101: Vector
Search](https://tiledb.com/blog/tiledb-101-vector-search)
- **Twitter handle:** @tiledb
11 months ago
盐粒 Yanli 1b233798a0
feat: Support pgvecto.rs as a VectorStore (#12718)
Support [pgvecto.rs](https://github.com/tensorchord/pgvecto.rs) as a new
VectorStore type.

This introduces a new dependency
[pgvecto_rs](https://pypi.org/project/pgvecto_rs/) and upgrade
SQLAlchemy to ^2.

Relate to https://github.com/tensorchord/pgvecto.rs/issues/11

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
Daniel Chalef 0cbdba6a9b
zep: VectorStore: Use Native MMR (#12690)
- refactor to use Zep's native MMR; update example

@baskaryan @eyurtsev
11 months ago
Daniel Chalef cc3d3920e3
Zep: Summary Search and Example (#12686)
Zep now has the ability to search over chat history summaries. This PR
adds support for doing so. More here: https://blog.getzep.com/zep-v0-17/

@baskaryan @eyurtsev
11 months ago
Bagatur 526313002c
add import tests to all modules (#12806) 11 months ago
Harrison Chase 6609a6033f
fix vectorstore imports (#12804)
Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
Nuno Campos f66a9d2adf
Automatically add configurable key to config_schema if config_specs i… (#12798)
…s present

11 months ago
Praveen Venkateswaran 21eeba075c
enable the device_map parameter in huggingface pipeline (#12731)
### Enabling `device_map` in HuggingFacePipeline 

For multi-gpu settings with large models, the
[accelerate](https://huggingface.co/docs/accelerate/usage_guides/big_modeling#using--accelerate)
library provides the `device_map` parameter to automatically distribute
the model across GPUs / disk.

The [Transformers
pipeline](3520e37e86/src/transformers/pipelines/__init__.py (L543))
enables users to specify `device` (or) `device_map`, and handles cases
(with warnings) when both are specified.

However, Langchain's HuggingFacePipeline only supports specifying
`device` when calling transformers which limits large models and
multi-gpu use-cases.
Additionally, the [default
value](8bd3ce59cd/libs/langchain/langchain/llms/huggingface_pipeline.py (L72))
of `device` is initialized to `-1` , which is incompatible with the
transformers pipeline when `device_map` is specified.

This PR addresses the addition of `device_map` as a parameter, and
solves the incompatibility of `device = -1` when `device_map` is also
specified.
An additional test has been added for this feature. 
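A usage sketch, assuming `device_map` is forwarded to transformers as described above (model and values are placeholders):

```python
from langchain.llms import HuggingFacePipeline

llm = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    device_map="auto",  # let accelerate place the model across GPUs / disk
    pipeline_kwargs={"max_new_tokens": 64},  # under pipeline_kwargs, not model_kwargs
)
```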

Additionally, some existing tests no longer work since 
1. `max_new_tokens` has to be specified under `pipeline_kwargs` and not
`model_kwargs`
2. The GPT2 tokenizer raises a `ValueError: Pipeline with tokenizer
without pad_token cannot do batching`, since the `tokenizer.pad_token`
is `None` ([related
issue](https://github.com/huggingface/transformers/issues/19853) on the
transformers repo).

This PR handles fixing these tests as well.

Co-authored-by: Praveen Venkateswaran <praveen.venkateswaran@ibm.com>
11 months ago
Mark Bell 3276aa3e17
__getattr__ should raise AttributeError not ImportError on missing attributes (#12801)
[The python
spec](https://docs.python.org/3/reference/datamodel.html#object.__getattr__)
requires that `__getattr__` throw `AttributeError` for missing
attributes but there are several places throwing `ImportError` in the
current code base. This causes a specific problem with `hasattr` since
it calls `__getattr__` then looks only for `AttributeError` exceptions.
At present, calling `hasattr` on any of these modules will raise an
unexpected exception that most code will not handle as `hasattr`
throwing exceptions is not expected.

In our case this is triggered by an exception tracker (Airbrake) that
attempts to collect the version of all installed modules with code that
looks like: `if hasattr(mod, "__version__"):`. With `HEAD` this is
causing our exception tracker to fail on all exceptions.

I only changed instances of unknown attributes raising `ImportError` and
left instances of known attributes raising `ImportError`. It feels a
little weird but doesn't seem to break anything.
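A minimal sketch of the convention (an illustrative module-level `__getattr__`, not the actual patch):

```python
_LAZY_ATTRS = {"__version__": "0.0.0"}

def __getattr__(name: str):
    try:
        return _LAZY_ATTRS[name]
    except KeyError:
        # AttributeError (not ImportError) keeps hasattr() behaving as documented.
        raise AttributeError(f"module {__name__!r} has no attribute {name!r}") from None
```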
11 months ago
Illia 71d1a48b66
Use data from all Google search results in SerpApi.com wrapper (#12770)
- **Description:** Use all Google search results data in SerpApi.com
wrapper instead of the first one only
  - **Tag maintainer:** @hwchase17 

_P.S. `libs/langchain/tests/integration_tests/utilities/test_serpapi.py`
are not executed during the `make test`._
11 months ago
Nuno Campos c4fdf78d03
Fix AddableDict raising exception when used with non-addable values (#12785)
11 months ago
Erick Friis 49e283a0cd
CLI 0.0.13, Configurable Template Demo (#12796) 11 months ago
Nuno Campos d1c6ad7769
Fix on_llm_new_token(chunk=) for some chat models (#12784)
It was passing in message instead of generation

11 months ago
Erick Friis 070823f294
CLI 0.0.12 (#12787) 11 months ago
Bagatur 979501c0ca
bump 329 (#12778) 11 months ago
Erick Friis da821320d3
Fixes 'Nonetype' not iterable for ObsidianLoader (#12751)
Implements #12726 from @Di3mex
11 months ago
Eugene Yurtsev b1caae62fd
APIChain add restrictions to domains (CVE-2023-32786) (#12747)
* Restrict the chain to specific domains by default
* This is a breaking change, but it will fail loudly upon object
instantiation -- so there should be no silent errors for users
* Resolves CVE-2023-32786
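For illustration, a sketch of restricting the chain to an allow-list of domains (the parameter name `limit_to_domains` and the example domain are assumptions, not taken from this commit):

```python
from langchain.chains import APIChain
from langchain.llms import OpenAI

chain = APIChain.from_llm_and_api_docs(
    OpenAI(),
    api_docs="...",  # the API documentation string for the target API
    limit_to_domains=["https://api.open-meteo.com/"],  # requests outside this list fail loudly
)
```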
11 months ago
Erick Friis 4421ba46d7
Demo Server, Fix Timescale (#12746)
- improve demo server
- missing deps
11 months ago
Eugene Yurtsev 0e1aedb9f4
Use jinja2 sandboxing by default (#12733)
* This is an opt-in feature, so users should be aware of risks if using
jinja2.
* Regardless we'll add sandboxing by default to jinja2 templates -- this
  sandboxing is a best effort basis.
* Best strategy is still to make sure that jinja2 templates are only
loaded from trusted sources.
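A minimal sketch of sandboxed rendering with jinja2's own sandbox (best-effort protection only):

```python
from jinja2.sandbox import SandboxedEnvironment

# Templates should still come only from trusted sources.
env = SandboxedEnvironment()
template = env.from_string("Hello {{ name }}!")
print(template.render(name="world"))
```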
11 months ago
Erick Friis 14340ee7cd
use http.client instead of urllib3 (#12660)
dep problems with requests

cloudflare debugging not worth it with urllib
11 months ago
Bagatur eee5181b7a
bump 328, exp 37 (#12722) 11 months ago
Erick Friis 3405dbbc64
dash not underscore (#12716)
template names are auto-populating with the wrong convention (with
underscores)
11 months ago
123-fake-st 8bd3ce59cd
PyPDFLoader use url in metadata source if file is a web path (#12092)
**Description:** Update `langchain.document_loaders.pdf.PyPDFLoader` to
store url in metadata (instead of a temporary file path) if user
provides a web path to a pdf

- **Issue:** Related to #7034; the reporter on that issue submitted a PR
updating `PyMuPDFParser` for this behavior, but it has unresolved merge
issues as of 20 Oct 2023 #7077
- In addition to `PyPDFLoader` and `PyMuPDFParser`, these other classes
in `langchain.document_loaders.pdf` exhibit similar behavior and could
benefit from an update: `PyPDFium2Loader`, `PDFMinerLoader`,
`PDFMinerPDFasHTMLLoader`, `PDFPlumberLoader` (I'm happy to contribute
to some/all of that, including assisting with `PyMuPDFParser`, if my
work is agreeable)
- The root cause is that the underlying pdf parser classes, e.g.
`langchain.document_loaders.parsers.pdf.PyPDFParser`, never receive
information about the url; the parsers receive a
`langchain.document_loaders.blob_loaders.blob`, which contains the pdf
contents and local file path, but not the url
- This update passes the web path directly to the parser since it's
minimally invasive and doesn't require further changes to maintain
existing behavior for local files... bigger picture, I'd consider
extending `blob` so that extra information like this can be
communicated, but that has much bigger implications on the codebase
which I think warrants maintainer input

  - **Dependencies:** None

```python
# old behavior
>>> from langchain.document_loaders import PyPDFLoader
>>> loader = PyPDFLoader('https://arxiv.org/pdf/1706.03762.pdf')
>>> docs = loader.load()
>>> docs[0].metadata
{'source': '/var/folders/w2/zx77z1cs01s1thx5dhshkd58h3jtrv/T/tmpfgrorsi5/tmp.pdf', 'page': 0}

# new behavior
>>> from langchain.document_loaders import PyPDFLoader
>>> loader = PyPDFLoader('https://arxiv.org/pdf/1706.03762.pdf')
>>> docs = loader.load()
>>> docs[0].metadata
{'source': 'https://arxiv.org/pdf/1706.03762.pdf', 'page': 0}
```
11 months ago
Dave Kwon b1954aab13
feat: Add page metadata on PDFMinerLoader (#12277)
- **Description:** #12273 's suggestion PR
Like other PDFLoader, loading pdf per each page and giving page
metadata.
  - **Issue:** #12273 
  - **Twitter handle:** @blue0_0hope

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
11 months ago
Duda Nogueira 7148f3e1fe
Weaviate - Fix schema existence check (#12711)
This will allow you to create the schema beforehand. The check was failing
and preventing importing into existing classes.

11 months ago
Aidos Kanapyanov ae63c186af
Mask API key for Anyscale LLM (#12406)
Description: Add masking of API Key for Anyscale LLM when printed.
Issue: #12165 
Dependencies: None
Tag maintainer: @eyurtsev

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
11 months ago
Predrag Gruevski 5ae51a8a85
Fix typo highlighted by `ruff` autoformatter. (#12691)
H/t @MichaReiser for spotting it:
https://github.com/langchain-ai/langchain/pull/12585/files#r1378253045
11 months ago
Erick Friis 44c8b159b9
properly increment version in cli (#12685)
Went from 0.0.9 -> 0.0.11 without releasing. Back to 10, then release.
11 months ago
Leonid Ganeline ddcec005bc
fix for `YahooFinanceNewsTool` (#12665)
Added YahooFinanceNewsTool to the __init__.py; it was missed there.
11 months ago
Predrag Gruevski 01a3c9b94e
Use an in-project virtualenv in the CLI package. (#12678)
Keeping it in sync with how our other packages are configured.
11 months ago
Jacob Lee bd668fcea1
Adds version CLI command (#12619)
Will be automatically bumped with `poetry version patch`.

@efriis @hwchase17

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
11 months ago
Frank bf5805bb32
Add quip loader (#12259)
- **Description:** implement [quip](https://quip.com) loader
  - **Issue:** https://github.com/langchain-ai/langchain/issues/10352
  - **Dependencies:** No
  -  pass make format, make lint, make test

---------

Co-authored-by: Hao Fan <h_fan@apple.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
Roman Vasilyev c9a6940d58
PGVector fix (#12592)
The latest release was broken; this fixes it.

---------

Co-authored-by: Roman Vasilyev <rvasilyev@mozilla.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
11 months ago
Predrag Gruevski e8b99364b3
Use `ruff` for both linting and formatting in `langchain-cli`. (#12672)
Prior to this PR, `ruff` was used only for linting and not for
formatting, despite the names of the commands. This PR makes it be used
for both linting code and autoformatting it.
11 months ago
Margaret Qian acfc485808
Update MosaicML Embedding Input Key (#12657)
This input key was missed in the last update PR:
https://github.com/langchain-ai/langchain/pull/7391

The input/output formats are intended to be like this:

```
{"inputs": [<prompt>]} 

{"outputs": [<output_text>]}
```
11 months ago
Predrag Gruevski c871cc5055
Remove `print()` statements which seemed leftover from debugging. (#12648)
Added in #12159 presumably during debugging. Right now they cause a bit of visual noise.
11 months ago
Noam Gat 14e8c74736
LM Format Enforcer Integration + Sample Notebook (#12625)
## Description

This PR adds support for
[lm-format-enforcer](https://github.com/noamgat/lm-format-enforcer) to
LangChain.

![image](https://raw.githubusercontent.com/noamgat/lm-format-enforcer/main/docs/Intro.webp)

The library is similar to jsonformer / RELLM which are supported in
Langchain, but has several advantages such as
- Batching and Beam search support
- More complete JSON Schema support
- LLM has control over whitespace, improving quality
- Better runtime performance due to only calling the LLM's generate()
function once per generate() call.

The integration is loosely based on the jsonformer integration in terms
of project structure.

## Dependencies

No compile-time dependency was added, but if `lm-format-enforcer` is not
installed, a runtime error will occur if it is trying to be used.

## Tests

Due to the integration modifying the internal parameters of the
underlying huggingface transformer LLM, it is not possible to test
without building a real LM, which requires internet access. So, similar
to the jsonformer and RELLM integrations, the testing is via the
notebook.

## Twitter Handle

[@noamgat](https://twitter.com/noamgat)


Looking forward to hearing feedback!

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
Erick Friis 7f6e751a3d
template updates (#12646) 11 months ago
Predrag Gruevski f94e24dfd7
Install and use `ruff format` instead of black for code formatting. (#12585)
Best to review one commit at a time, since two of the commits are 100%
autogenerated changes from running `ruff format`:
- Install and use `ruff format` instead of black for code formatting.
- Output of `ruff format .` in the `langchain` package.
- Use `ruff format` in experimental package.
- Format changes in experimental package by `ruff format`.
- Manual formatting fixes to make `ruff .` pass.
11 months ago
William FH bfd719f9d8
bind_functions convenience method (#12518)
I always take 20-30 seconds to re-discover where the
`convert_to_openai_function` wrapper lives in our codebase. Chat
langchain [has no
clue](https://smith.langchain.com/public/3989d687-18c7-4108-958e-96e88803da86/r)
what to do either. There's the older `create_openai_fn_chain` , but we
haven't been recommending it in LCEL. The example we show in the
[cookbook](https://python.langchain.com/docs/expression_language/how_to/binding#attaching-openai-functions)
is really verbose.


General function calling should be as simple as possible to do, so this
seems a bit more ergonomic to me (feel free to disagree). Another option
would be to directly coerce directly in the class's init (or when
calling invoke), if provided. I'm not 100% set against that. That
approach may be too easy but not simple. This PR feels like a decent
compromise between simple and easy.

```
from enum import Enum
from typing import Optional

from pydantic import BaseModel, Field


class Category(str, Enum):
    """The category of the issue."""

    bug = "bug"
    nit = "nit"
    improvement = "improvement"
    other = "other"


class IssueClassification(BaseModel):
    """Classify an issue."""

    category: Category
    other_description: Optional[str] = Field(
        description="If classified as 'other', the suggested other category"
    )
    

from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI().bind_functions([IssueClassification])
llm.invoke("This PR adds a convenience wrapper to the bind argument")

# AIMessage(content='', additional_kwargs={'function_call': {'name': 'IssueClassification', 'arguments': '{\n  "category": "improvement"\n}'}})
```
11 months ago
Nuno Campos 3143324984
Improve Runnable type inference for input_schemas (#12630)
- Prefer lambda type annotations over inferred dict schema
- For sequences that start with RunnableAssign infer seq input type as
"input type of 2nd item in sequence - output type of runnable assign"
11 months ago
Nuno Campos 2f563cee20
Add Runnable.with_listeners() (#12549)
- This binds start/end/error listeners to a runnable, which will be
called with the Run object
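A minimal usage sketch (listener names are illustrative):

```python
from langchain.schema.runnable import RunnableLambda

def log_start(run):
    print("started run", run.id)

def log_end(run):
    print("finished run", run.id)

chain = RunnableLambda(lambda x: x + 1).with_listeners(on_start=log_start, on_end=log_end)
chain.invoke(1)
```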
11 months ago
Bagatur bcc62d63be
bump 327 (#12623) 11 months ago
Erick Friis a1fae1fddd
Readme rewrite (#12615)
Co-authored-by: Lance Martin <lance@langchain.dev>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
11 months ago
Yujie Qian 1dbb77d7db
VoyageEmbeddings (#12608)
- **Description:** Integrate VoyageEmbeddings into LangChain, with tests
and docs
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Tag maintainer:** N/A
  - **Twitter handle:** @Voyage_AI_

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
chocolate4 92bf40a921
Add a new vector store hippo for langchain #11763 (#12412)
#11763

---------

Co-authored-by: TranswarpHippo <hippo.0.assistant@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
Karthik Raja A 342d6c7ab6
Multi on client toolkit (#12392)
- Add MultiOn close function, update key value, and add async functionality
- Solved the "key value TabId not found" issue (updated to use the latest key
value)
  
@hwchase17
11 months ago
Prabin Nepal b109cb031b
SecretStr for fireworks api (#12475)
- **Description:** This pull request removes secrets present in raw
format,
- **Issue:** Fireworks api key was exposed when printing out the
langchain object
[#12165](https://github.com/langchain-ai/langchain/issues/12165)
 - **Maintainer:** @eyurtsev

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
Harrison Chase a32c236c64
bump cli to 009 (#12611) 11 months ago
Martin Schade 0c7f1d8b21
Textract linearizer (#12446)
**Description:** Textract PDF Loader generating linearized output,
meaning it will replicate the structure of the source document as close
as possible based on the features passed into the call (e. g. LAYOUT,
FORMS, TABLES). With LAYOUT reading order for multi-column documents or
identification of lists and figures is supported and with TABLES it will
generate the table structure as well. FORMS will indicate "key: value"
with columns.
  - **Issue:** the issue fixes #12068 
- **Dependencies:** amazon-textract-textractor is added, which provides
the linearization
  - **Tag maintainer:** @3coins 

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
Erick Friis f39246bd7e
cli should pull instead of delete+clone (#12607)
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
11 months ago
Harrison Chase 8b5e879171
add a template for the package readme (#12499)
Co-authored-by: Erick Friis <erick@langchain.dev>
11 months ago
Bagatur 9bedda50f2
Bagatur/lakefs loader2 (#12524)
Co-authored-by: Jonathan Rosenberg <96974219+Jonathan-Rosenberg@users.noreply.github.com>
11 months ago
Ackermann Yuriy 99b69fe607
Fixed missing optional tags. Added default key value for Ollama (#12599)
Added missing Optional typings. Added default values for Ollama optional
keys.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
Bagatur 016813d189
factor out to_secret (#12593) 11 months ago
hsuyuming 630ae24b28
implement get_num_tokens to use google's count_tokens function (#10565)
can get the correct token count instead of using gpt-2 model

**Description:** 
Implement get_num_tokens within VertexLLM to use google's count_tokens
function.
(https://cloud.google.com/vertex-ai/docs/generative-ai/get-token-count).
So we don't need to download the gpt-2 model from Hugging Face; also, when we
run the map-reduce chain we get the correct token count.
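A brief sketch of the effect (model name is a placeholder; token counting now goes through Vertex AI's count_tokens rather than a GPT-2 tokenizer):

```python
from langchain.llms import VertexAI

llm = VertexAI(model_name="text-bison")
print(llm.get_num_tokens("How many tokens is this sentence?"))
```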

**Tag maintainer:** 
@lkuligin 
**Twitter handle:** 
My twitter: @abehsu1992626

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
Pham Vu Thai Minh 33e77a1007
Async support for FAISS (#11333)
Following this tutoral about using OpenAI Embeddings with FAISS

https://python.langchain.com/docs/integrations/vectorstores/faiss

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.document_loaders import TextLoader
from langchain.document_loaders import TextLoader

loader = TextLoader("../../../extras/modules/state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()
```

This works fine

```python
db = FAISS.from_documents(docs, embeddings)
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
```

But the async version is not

```python
db = await FAISS.afrom_documents(docs, embeddings)  # NotImplementedError
query = "What did the president say about Ketanji Brown Jackson"

docs = await db.asimilarity_search(query) # this will use await asyncio.get_event_loop().run_in_executor under the hood and will not call OpenAIEmbeddings.aembed_query but call OpenAIEmbeddings.embed_query
```

So this PR add async/await supports for FAISS

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
11 months ago
Jeff Zhuo 13b89815a3
Issue: fix the issue #11648 init minimax llm (#12554)
Fixes https://github.com/langchain-ai/langchain/issues/11648: Minimax
llm failed to initialize.

The idea of this fix is
https://github.com/langchain-ai/langchain/issues/10917#issuecomment-1765606725:

do not use an underscore in the Python model class.

---------

Co-authored-by: zhuojianming@cmcm.com <zhuojianming@cmcm.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
Florian Valeye bfb27324cb
[Matching Engine] Update the Matching Engine to include the distance and filters (#12555)
Hello 👋,

This Pull Request adds more capability to the
[MatchingEngine](https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.matching_engine.MatchingEngine.html)
vectorstore of GCP. It includes the
`similarity_search_by_vector_with_relevance_scores` function and also
[filters](https://cloud.google.com/vertex-ai/docs/vector-search/filtering)
to `filter` the namespaces when retrieving the results.

- **Description:** Add
[filter](https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.MatchingEngineIndexEndpoint#google_cloud_aiplatform_MatchingEngineIndexEndpoint_find_neighbors)
in `similarity_search` and add
`similarity_search_by_vector_with_relevance_scores` method
  - **Dependencies:** None
  - **Tag maintainer:** Unknown

Thank you!

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
Harrison Chase 1d51363e49
change project template (#12493) 11 months ago
Holt Skinner e53b9ccd70
feat: Add Google Cloud Text-to-Speech Tool (#12572)
- Add Tool for [Google Cloud
Text-to-Speech](https://cloud.google.com/text-to-speech)
- Follows similar structure to [Eleven Labs
Text2Speech](https://python.langchain.com/docs/integrations/tools/eleven_labs_tts)

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
Adilkhan Sarsen 6e702b9c36
Deep memory support in LangChain (#12268)
- Description: adding support for Activeloop's DeepMemory feature, which
boosts recall by up to 25%. Added a Jupyter notebook showcasing the feature
and also made index params explicit.
- Twitter handle: will really appreciate if we could announce this on
twitter.

---------

Co-authored-by: adolkhan <adilkhan.sarsen@alumni.nu.edu.kz>
11 months ago
billytrend-cohere b1e3843931
Add client_name="langchain" to Cohere usage (#11328)
Hey, we're looking to invest more in adding cohere integrations to
langchain so would love to get more of an idea for how it's used.
Hopefully this pr is acceptable. This week I'm also going to be looking
into adding our new [retrieval augmented generation
product](https://txt.cohere.com/chat-with-rag/) to langchain.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
Bagatur 37aec1e050
bump 326 (#12569) 11 months ago
Eugene Yurtsev 1b1a2d5740
Image Caption accepts bytes for images (#12561)
Accept bytes for images in image caption

---------

Co-authored-by: webcoderz <19884161+webcoderz@users.noreply.github.com>
11 months ago
Nuno Campos 7897483819
Allow astream_log to be used inside atrace_as_chain_group (#12558)
11 months ago
Holt Skinner e05bb938de
Merge pull request #12433
* feat: Add Google Cloud Translation document transformer

* Merge branch 'langchain-ai:master' into google-translate

* Add documentation for Google Translate Document Transformer

* Fix line length error

* Merge branch 'master' into google-translate

* Merge branch 'google-translate' of https://github.com/holtskinner/lan…

* Addressed code review comments

* Merge branch 'master' into google-translate

* Merge branch 'google-translate' of https://github.com/holtskinner/lan…

* Removed extra variable

* Merge branch 'google-translate' of https://github.com/holtskinner/lan…

* Merge branch 'master' into google-translate

* Merge branch 'google-translate' of https://github.com/holtskinner/lan…

* Removed extra import
11 months ago
Samad Koita d1fdcd4fcb
Masking of API Key for GooseAI LLM (#12496)
Description: Add masking of API Key for GooseAI LLM when printed.
Issue: https://github.com/langchain-ai/langchain/issues/12165
Dependencies: None
Tag maintainer: @eyurtsev

---------

Co-authored-by: Samad Koita <>
11 months ago
Andrew Zhou 64c4a698a8
More comprehensive readthedocs document loader (#12382)
## **Description:**
When building our own readthedocs.io scraper, we noticed a couple
interesting things:

1. Text lines with a lot of nested <span> tags would give unclean text
with a bunch of newlines. For example, for [Langchain's
documentation](https://api.python.langchain.com/en/latest/document_loaders/langchain.document_loaders.readthedocs.ReadTheDocsLoader.html#langchain.document_loaders.readthedocs.ReadTheDocsLoader),
a single line is represented in a complicated nested HTML structure, and
the naive `soup.get_text()` call currently being made will create a
newline for each nested HTML element. Therefore, the document loader
would give a messy, newline-separated blob of text. This would be true
in a lot of cases.

<img width="945" alt="Screenshot 2023-10-26 at 6 15 39 PM"
src="https://github.com/langchain-ai/langchain/assets/44193474/eca85d1f-d2bf-4487-a18a-e1e732fadf19">
<img width="1031" alt="Screenshot 2023-10-26 at 6 16 00 PM"
src="https://github.com/langchain-ai/langchain/assets/44193474/035938a0-9892-4f6a-83cd-0d7b409b00a3">

Additionally, content from iframes, code from scripts, css from styles,
etc. will be gotten if it's a subclass of the selector (which happens
more often than you'd think). For example, [this
page](https://pydeck.gl/gallery/contour_layer.html#) will scrape 1.5
million characters of content that looks like this:

<img width="1372" alt="Screenshot 2023-10-26 at 6 32 55 PM"
src="https://github.com/langchain-ai/langchain/assets/44193474/dbd89e39-9478-4a18-9e84-f0eb91954eac">

Therefore, I wrote a recursive _get_clean_text(soup) class function that
1. skips all irrelevant elements, and 2. only adds newlines when
necessary.

2. Index pages (like [this
one](https://api.python.langchain.com/en/latest/api_reference.html))
would be loaded, chunked, and eventually embedded. This is really bad
not just because the user will be embedding irrelevant information - but
because index pages are very likely to show up in retrieved content,
making retrieval less effective (in our tests). Therefore, I added a
bool parameter `exclude_index_pages` defaulted to False (which is the
current behavior — although I'd petition to default this to True) that
will skip all pages where links take up 50%+ of the page. Through manual
testing, this seems to be the best threshold.
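A usage sketch of the new flag described above (the path is a placeholder):

```python
from langchain.document_loaders import ReadTheDocsLoader

# exclude_index_pages defaults to False to preserve the current behavior.
loader = ReadTheDocsLoader("rtdocs/", exclude_index_pages=True)
docs = loader.load()
```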



## Other Information:
  - **Issue:** n/a
  - **Dependencies:** n/a
  - **Tag maintainer:** n/a
  - **Twitter handle:** @andrewthezhou

---------

Co-authored-by: Andrew Zhou <andrew@heykona.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
Peter Vandenabeele 3468c038ba
Add unit tests for document_transformers/beautiful_soup_transformer.py (#12520)
- **Description:**
* Add unit tests for document_transformers/beautiful_soup_transformer.py
* Basic functionality is tested (extract tags, remove tags, drop lines)
    * add a FIXME comment about the order of tags that is not preserved
      (and a passing test, but with the expected tags now out-of-order)
  - **Issue:** None
  - **Dependencies:** None
  - **Tag maintainer:** @rlancemartin 
  - **Twitter handle:** `peter_v`

Please make sure your PR is passing linting and testing before
submitting.

=> OK: I ran `make format`, `make test` (passing after install of
beautifulsoup4) and `make lint`.
11 months ago
Anirudh Gautam b257e6a4e8
Mask API key for AI21 LLM (#12418)
- **Description:** Added masking of the API Key for AI21 LLM when
printed and improved the docstring for AI21 LLM.
- Updated the AI21 LLM to utilize SecretStr from pydantic to securely
manage API key.
- Made improvements in the docstring of AI21 LLM. It now mentions that
the API key can also be passed as a named parameter to the constructor.
    - Added unit tests.
  - **Issue:** #12165 
  - **Tag maintainer:** @eyurtsev

---------

Co-authored-by: Anirudh Gautam <anirudh@Anirudhs-Mac-mini.local>
11 months ago
silvhua 9dead1034c
`_dalle_image_url` returns list of urls if n>1 (#11800)
- **Description:** Updated the `_dalle_image_url` method to return a
list of URLs if self.n>1,
  - **Issue:** #10691,
  - **Dependencies:** unsure,
  - **Tag maintainer:** @eyurtsev,
  - **Twitter handle:** @silvhua
---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
Bagatur 1815ea2fdb
OpenAI runnable constructor (#12455) 11 months ago
William FH a830b809f3
Patch forward ref bug (#12508)
Currently this gives a bug:
```
from langchain.schema.runnable import RunnableLambda

bound = RunnableLambda(lambda x: x).with_config({"callbacks": []})

# ConfigError: field "callbacks" not yet prepared so type is still a ForwardRef, you might need to call RunnableConfig.update_forward_refs().
```

Rather than deal with cyclic imports and extra load time, etc., I think
it makes sense to just have a separate Callbacks definition here that is
a relaxed typehint.
11 months ago
William FH 36204c2baf
Evaluation Callback Multi Response (#12505)
1. Allow run evaluators to return {"results": [list of evaluation
results]} in the evaluator callback.
2. Allows run evaluators to pick the target run ID to provide feedback
to

(1) means you could do something like a function call that populates a
full rubric in one go (not sure how reliable that is in general though)
rather than splitting off into separate LLM calls - cheaper and less
code to write
(2) means you can provide feedback to runs on subsequent calls.
Immediate use case is if you wanted to add an evaluator to a chat bot
and assign to assign to previous conversation turns


have a corresponding one in the SDK
11 months ago
Harrison Chase 9e0ae56287
various templates improvements (#12500) 11 months ago
0xC9 79cf01366e
Update tool.py (#12472)
In the GoogleSerperResults class, the name field is defined as
'google_serrper_results_json'. This looks like a typo, and perhaps
should be 'google_serper_results_json'.

11 months ago
Harrison Chase eb903e211c
bump to 36 (#12487) 11 months ago
Tyler Hutcherson 4209457bdc
Redis langserve template (#12443)
Add Redis langserve template! Eventually will add semantic caching to
this too. But I was struggling to get that to work for some reason with
the LCEL implementation here.

- **Description:** Introduces the Redis LangServe template. A simple RAG
based app built on top of Redis that allows you to chat with company's
public financial data (Edgar 10k filings)
  - **Issue:** None
- **Dependencies:** The template contains the poetry project
requirements to run this template
  - **Tag maintainer:** @baskaryan @Spartee 
  - **Twitter handle:** @tchutch94

**Note**: this requires the commit here that deletes the
`_aget_relevant_documents()` method from the Redis retriever class that
wasn't implemented. That was breaking the langserve app.

---------

Co-authored-by: Sam Partee <sam.partee@redis.com>
11 months ago
Erick Friis 9adaa78c65
cli improvements (#12465)
Features
- add multiple repos by their branch/repo
- generate `pip install` commands and `add_route()` code
![Screenshot 2023-10-27 at 4 49 52
PM](https://github.com/langchain-ai/langchain/assets/9557659/3aec4cbb-3f67-4f04-8370-5b54ea983b2a)

Optimizations:
- group installs by repo/branch to avoid duplicate cloning
11 months ago
Adam Law df4960a6d8
add reranking to azuresearch (#12454)
- **Description:** Adds returning the reranking score when using semantic
search
- **Issue:** #12317

---------

Co-authored-by: Adam Law <adamlaw@microsoft.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
Eugene Yurtsev 60d009f75a
Add security note to API chain (#12452)
Add security note
11 months ago
Matvey Arye 11505f95d3
Improve handling of empty queries for timescale vector (#12393)
**Description:** Improve handling of empty queries in timescale-vector.
For timescale-vector it is more efficient to get a None embedding when
the embedding has no semantic meaning. It allows timescale-vector to
perform more optimizations. Thus, when the query is empty, use a None
embedding.

 Also pass down constructor arguments to the timescale vector client.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
Erick Friis 38cee5fae0
cli updates 2 (#12447)
- extras group
- readme
- another readme

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
11 months ago
William FH 5d40e36c75
Trace if run tree set (#12444)
This code path is hit in the following case:
- Start in langchain code and manually provide a tracer
- Handoff to the traceable
- Hand back to langchain code.

Which happens for evaluating `@traceable` functions unfortunately
11 months ago
Bagatur c2a0a6b6df
make doc utils public (#12394) 11 months ago
Henter d6888a90d0
Fix the missing temperature parameter for Baichuan-AI chat_model (#12420)
**Description:** add the missing `temperature` parameter for the Baichuan-AI
chat_model

Baichuan-AI api doc: https://platform.baichuan-ai.com/docs/api
11 months ago
Erick Friis 6908634428
cli updates oct27 (#12436) 11 months ago
HwangJohn d38c8369b3
added rrf argument in ApproxRetrievalStrategy class __init__() (#11987)
- **Description:** To handle hybrid search with RRF (Reciprocal Rank
Fusion) in Elasticsearch, an `rrf` argument was added for adjusting
`rank_constant` and `window_size` to combine multiple result sets with
different relevance indicators into a single result set (see the sketch
after this list; ref:
https://www.elastic.co/kr/blog/whats-new-elastic-enterprise-search-8-9-0),
  - **Issue:** the issue # it fixes (if applicable),
  - **Dependencies:** No dependencies changed,
  - **Tag maintainer:** @baskaryan,
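
A hedged usage sketch (the shape of the `rrf` argument is inferred from the description above, so treat the keys and values as illustrative):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.elasticsearch import ElasticsearchStore

# rrf tunes Reciprocal Rank Fusion when the kNN and BM25 result sets are combined.
store = ElasticsearchStore(
    es_url="http://localhost:9200",
    index_name="docs",
    embedding=OpenAIEmbeddings(),
    strategy=ElasticsearchStore.ApproxRetrievalStrategy(
        hybrid=True,
        rrf={"rank_constant": 60, "window_size": 100},  # illustrative values
    ),
)
```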

Nice to meet you,
I'm a newbie for contributions and it's my first PR.

I only changed the langchain/vectorstores/elasticsearch.py file.
I ran `make format` and `make lint`, and I got this message:
```shell
make lint_diff  
./scripts/check_pydantic.sh .
./scripts/check_imports.sh
poetry run ruff .
[ "langchain/vectorstores/elasticsearch.py" = "" ] || poetry run black langchain/vectorstores/elasticsearch.py --check
All done!  🍰 
1 file would be left unchanged.
[ "langchain/vectorstores/elasticsearch.py" = "" ] || poetry run mypy langchain/vectorstores/elasticsearch.py
langchain/__init__.py: error: Source file found twice under different module names: "mvp.nlp.langchain.libs.langchain.langchain" and "langchain"
Found 1 error in 1 file (errors prevented further checking)
make: *** [lint_diff] Error 2
```

Thank you

---------

Co-authored-by: 황중원 <jwhwang@amorepacific.com>
11 months ago
Roman Vasilyev 2c58dca5f0
optional reusable connection (#12051)
My Postgres ran out of connections after continuous PGVector usage because
the integration constantly creates new connections, so adding a reusable,
pre-established connection seems to solve the issue (see the sketch below).
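
A rough sketch of what reusing a single connection could look like (the `connection` keyword below is a hypothetical name for illustration, not necessarily the exact parameter added here):

```python
import sqlalchemy
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.pgvector import PGVector

CONNECTION_STRING = "postgresql+psycopg2://user:pass@localhost:5432/db"  # placeholder

# Establish one connection up front and reuse it across PGVector operations
# instead of opening a fresh connection every time.
engine = sqlalchemy.create_engine(CONNECTION_STRING)

store = PGVector(
    connection_string=CONNECTION_STRING,
    embedding_function=OpenAIEmbeddings(),
    collection_name="docs",
    connection=engine.connect(),  # hypothetical kwarg for the pre-established connection
)
```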

---------

Co-authored-by: Roman Vasilyev <rvasilyev@mozilla.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
11 months ago
Ennio Pastore 48fde2004f
Update long_context_reorder.py (#12422)
The function comment was confusing and inaccurate
11 months ago
Bagatur a8c68d4ffa
Type LLMChain.llm as runnable (#12385) 11 months ago
Bagatur d12b88557a
Bagatur/bump 325 (#12440) 11 months ago
Eugene Yurtsev cadfce295f
Deprecate PythonRepl tools and Pandas/Xorbits/Spark DataFrame/Python/CSV agents (#12427)
See discussion here:
https://github.com/langchain-ai/langchain/discussions/11680

The code is available for usage from langchain_experimental. The reason
for the deprecation is that the agents are relying on a Python REPL. The
code can only be run safely with appropriate sandboxing.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
Harrison Chase 0ca539eb85
Clean up deprecated agents and update __init__ in experimental (#12231)
Update init paths in experimental
11 months ago
Holt Skinner 134f085824
feat: Add Google Speech to Text API Document Loader (#12298)
- Add Document Loader for Google Speech to Text
  - Similar Structure to [Assembly AI Document Loader][1]

[1]:
https://python.langchain.com/docs/integrations/document_loaders/assemblyai
11 months ago
David Duong 52c194ec3a
Fix templates typos (#12428) 11 months ago
Massimiliano Pronesti c8195769f2
fix(openai-callback): completion count logic (#12383)
The changes introduced in #12267 and #12190 broke the cost computation
of the `completion` tokens for fine-tuned models because of the early
return. This PR aims at fixing this.
@baskaryan.
11 months ago
Stefan Langenbach b22da81af8
Mask API key for Aleph Alpha LLM (#12377)
- **Description:** Add masking of API Key for Aleph Alpha LLM when
printed.
- **Issue**: #12165
- **Dependencies:** None
- **Tag maintainer:** @eyurtsev

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
11 months ago
William FH 4254028c52
Str Evaluator Mapper (#12401) 11 months ago
William FH fcad1d2965
Add space (#12395) 11 months ago
William FH 922d7910ef
Wfh/json schema evaluation (#12389)
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
11 months ago
Christian Kasim Loan a35445c65f
johnsnowlabs embeddings support (#11271)
- **Description:** Introducing the
[JohnSnowLabsEmbeddings](https://www.johnsnowlabs.com/)
  - **Dependencies:** johnsnowlabs
  - **Tag maintainer:** @C-K-Loan
- **Twitter handle:** https://twitter.com/JohnSnowLabs
https://twitter.com/ChristianKasimL

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
SteveLiao c08b622b2d
Add HTML Title and Page Language into metadata for AsyncHtmlLoader (#11326)
**Description:** 
Revise `libs/langchain/langchain/document_loaders/async_html.py` to
store the HTML Title and Page Language in the `metadata` of
`AsyncHtmlLoader`.
11 months ago
Shorthills AI 25c98dbba9
Fixed some grammatical and Exception types issues (#12015)
Fixed some grammatical issues and Exception types.

@baskaryan , @eyurtsev

---------

Co-authored-by: Sanskar Tanwar <142409040+SanskarTanwarShorthillsAI@users.noreply.github.com>
Co-authored-by: UpneetShorthillsAI <144228282+UpneetShorthillsAI@users.noreply.github.com>
Co-authored-by: HarshGuptaShorthillsAI <144897987+HarshGuptaShorthillsAI@users.noreply.github.com>
Co-authored-by: AdityaKalraShorthillsAI <143726711+AdityaKalraShorthillsAI@users.noreply.github.com>
Co-authored-by: SakshiShorthillsAI <144228183+SakshiShorthillsAI@users.noreply.github.com>
11 months ago
William FH 923696b664
Wfh/json edit dist (#12361)
Compare predicted json to reference. First canonicalize (sort keys, rm
whitespace separators), then return normalized string edit distance.

Not a silver bullet, but maybe an easy way to capture structural
differences in a less flaky way
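
A minimal, self-contained sketch of the idea (not the evaluator's actual code): canonicalize both JSON strings, then compute a normalized Levenshtein distance.

```python
import json


def canonicalize(raw: str) -> str:
    """Parse JSON, then re-serialize with sorted keys and no whitespace separators."""
    return json.dumps(json.loads(raw), sort_keys=True, separators=(",", ":"))


def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[len(b)]


def json_edit_distance(prediction: str, reference: str) -> float:
    """Normalized string edit distance between canonicalized JSON documents."""
    p, r = canonicalize(prediction), canonicalize(reference)
    return edit_distance(p, r) / (max(len(p), len(r)) or 1)
```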

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
11 months ago
Erick Friis 4db8d82c55
CLI CI 2 (#12387)
Will run all CI because of _test change, but future PRs against CLI will
only trigger the new CLI one

Has a bunch of file changes related to formatting/linting.

No mypy yet - coming soon
11 months ago
Tyler Hutcherson 231d553824
Update broken redis tests (#12371)
Update broken redis tests -- tiny PR :) 
- **Description:** Fixes Redis tests on master (looks like they were broken
by https://github.com/langchain-ai/langchain/pull/11257)
  - **Issue:** None,
  - **Dependencies:** No
  - **Tag maintainer:** @baskaryan @Spartee 
  - **Twitter handle:** N/A

Co-authored-by: Sam Partee <sam.partee@redis.com>
11 months ago
Erick Friis 03e79e62c2
cli fix (#12380) 11 months ago
Bagatur 76230d2c08
fireworks scheduled integration tests (#12373) 11 months ago
Josh Phillips 01c5cd365b
Fix SupabaseVectorStore write operation timeout (#12318)
**Description**
This small change will make chunk_size a configurable parameter for
loading documents into a Supabase database.
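
A hedged usage sketch (client URL, key, and table name are placeholders; the point is the now-configurable `chunk_size`):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import SupabaseVectorStore
from supabase.client import create_client

supabase = create_client("https://xyz.supabase.co", "service-role-key")  # placeholders

# A smaller chunk_size means smaller insert batches, which helps avoid
# write timeouts on constrained Supabase instances.
store = SupabaseVectorStore.from_texts(
    texts=["doc one", "doc two"],
    embedding=OpenAIEmbeddings(),
    client=supabase,
    table_name="documents",
    chunk_size=100,
)
```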

**Issue**
https://github.com/langchain-ai/langchain/issues/11422

**Dependencies**
No changes

**Twitter**
@ j1philli

**Reminder**
If no one reviews your PR within a few days, please @-mention one of
@baskaryan, @eyurtsev, @hwchase17.

---------

Co-authored-by: Greg Richardson <greg.nmr@gmail.com>
11 months ago
Bagatur b10cefb160
lint fix: rm init (#12374) 11 months ago
Harrison Chase b43996e553
Harrison/improve cli (#12368) 11 months ago
Harrison Chase 9ce38726a2
fix some stuff (#12292)
Co-authored-by: Erick Friis <erick@langchain.dev>
11 months ago
Cynthia Yang 6ce276e099
Support Fireworks batching (#8) (#12052)
Description

* Add _generate and _agenerate to support Fireworks batching.
* Add stop words test cases
* Opt out retry mechanism

Issue - Not applicable
Dependencies - None
Tag maintainer - @baskaryan
11 months ago
Tyler Hutcherson 2f0c9d8269
Fix redis vectorfield schema defaults (#12223)
- **Description:** refactors the redis vector field schema to properly
handle default values, includes a new unit test suite.
  - **Issue:** N/A
  - **Dependencies:** nothing new.
  - **Tag maintainer:** @baskaryan @Spartee 
  - **Twitter handle:** this is a tiny fix/improvement :) 

This issue was causing some clients/customers issues when building a
vector index on Redis on smaller db instances (due to faulty default
values in the index configuration). It would raise an error like:

```redis.exceptions.ResponseError: Vector index initial capacity 20000 exceeded server limit (852 with the given parameters)```

This PR will address this moving forward.
11 months ago
Jakub Novák 9544d64ad8
E2B tool - Improve description with uploaded files info (#12355) 11 months ago
Bagatur c6a733802b
bump 324 and 35 (#12352) 11 months ago
Nuno Campos 683e97766d
Fix json key output parser in partial (streaming) mode (#12332)
11 months ago
Nikhil Jha dff24285ea
Comprehend Moderation 0.2 (#11730)
This PR replaces the previous `Intent` check with the new `Prompt
Safety` check. The logic and steps to enable chain moderation via the
Amazon Comprehend service, allowing you to detect and redact PII, toxicity,
and prompt-safety information in the LLM prompt or answer, remain
unchanged.
This implementation updates the code and configuration types with
respect to `Prompt Safety`.


### Usage sample

```python
# Imports added so the sample runs end to end; the region is an example value.
import boto3

from langchain.chains import LLMChain
from langchain.llms.fake import FakeListLLM
from langchain.prompts import PromptTemplate
from langchain_experimental.comprehend_moderation import (
    AmazonComprehendModerationChain,
    BaseModerationConfig,
    ModerationPiiConfig,
    ModerationPromptSafetyConfig,
    ModerationToxicityConfig,
)

# Boto3 Comprehend client used by the moderation chain
comprehend_client = boto3.client("comprehend", region_name="us-east-1")

pii_config = ModerationPiiConfig(
    labels=["SSN"],
    redact=True,
    mask_character="X"
)

toxicity_config = ModerationToxicityConfig(
    threshold=0.5
)

prompt_safety_config = ModerationPromptSafetyConfig(
    threshold=0.5
)

moderation_config = BaseModerationConfig(
    filters=[pii_config, toxicity_config, prompt_safety_config]
)

comp_moderation_with_config = AmazonComprehendModerationChain(
    moderation_config=moderation_config, #specify the configuration
    client=comprehend_client,            #optionally pass the Boto3 Client
    verbose=True
)

template = """Question: {question}

Answer:"""

prompt = PromptTemplate(template=template, input_variables=["question"])

responses = [
    "Final Answer: A credit card number looks like 1289-2321-1123-2387. A fake SSN number looks like 323-22-9980. John Doe's phone number is (999)253-9876.", 
    "Final Answer: This is a really shitty way of constructing a birdhouse. This is fucking insane to think that any birds would actually create their motherfucking nests here."
]
llm = FakeListLLM(responses=responses)

llm_chain = LLMChain(prompt=prompt, llm=llm)

chain = ( 
    prompt 
    | comp_moderation_with_config 
    | {llm_chain.input_keys[0]: lambda x: x['output'] }  
    | llm_chain 
    | { "input": lambda x: x['text'] } 
    | comp_moderation_with_config 
)

try:
    response = chain.invoke({"question": "A sample SSN number looks like this 123-456-7890. Can you give me some more samples?"})
except Exception as e:
    print(str(e))
else:
    print(response['output'])

```

### Output

```python
> Entering new AmazonComprehendModerationChain chain...
Running AmazonComprehendModerationChain...
Running pii Validation...
Running toxicity Validation...
Running prompt safety Validation...

> Finished chain.


> Entering new AmazonComprehendModerationChain chain...
Running AmazonComprehendModerationChain...
Running pii Validation...
Running toxicity Validation...
Running prompt safety Validation...

> Finished chain.
Final Answer: A credit card number looks like 1289-2321-1123-2387. A fake SSN number looks like XXXXXXXXXXXX John Doe's phone number is (999)253-9876.
```

---------

Co-authored-by: Jha <nikjha@amazon.com>
Co-authored-by: Anjan Biswas <anjanavb@amazon.com>
Co-authored-by: Anjan Biswas <84933469+anjanvb@users.noreply.github.com>
11 months ago
Blake (Yung Cher Ho) b9410f2b6f
Takeoff pro support (#12070)
**Description:**
This PR adds support for the [Pro version of Titan Takeoff
Server](https://docs.titanml.co/docs/category/pro-features). Users of
the Pro version will have to import the TitanTakeoffPro model, which is
different from TitanTakeoff.

**Issue:**
Also minor fixes to docs for Titan Takeoff (Community version)

**Dependencies:**
No additional dependencies

 **Twitter handle:** @becoming_blake

@baskaryan @hwchase17
11 months ago
Leonid Kuligin 4e47fe1dce
fixed error message and a check for processor name (#12200)
- **Description:** a small fix on the error description / a check for the
processor name
  - **Issue:** #11407
11 months ago
Nir Kopler 9298aff783
Finetuned openai azure models cost calculation (#12267)
**Description:**
Add cost calculation for fine-tuned **Azure** models, with relevant unit tests.
see
https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/fine-tuning?tabs=turbo&pivots=programming-language-studio
for more information.
this PR is the result of this PR:
https://github.com/langchain-ai/langchain/pull/12190

Twitter handle: @nirkopler
11 months ago
gnakw 20fe515f20
Fix the exception from langchain.utilities import ArceeWrapper (#12342)
- **Description:** Fix the exception raised by `from langchain.utilities
import ArceeWrapper`
11 months ago
Qihui Xie 6720458c7d
add allowed_operators property in QdrantTranslator (#12328)
- **Description:** 
This PR adds an `allowed_operators` property to `QdrantTranslator` to fix
the `TypeError: can only join an iterable` bug. This property is
required in `get_query_constructor_prompt` in
`query_constructor/base.py`:
```
allowed_operators=" | ".join(allowed_operators),
```
  - **Issue:** 
#12061

---------

Co-authored-by: XIE Qihui <qihui.xie@bopufund.com>
11 months ago
Bagatur f5a57fc1ef
fix self query constructor (#12349) 11 months ago
Vasek Mlejnsky cdd75b687e
e2b tool - fix initialization and improve tool description (#12345) 11 months ago
Harrison Chase 8ec7aade9f
add docs for templates (#12346) 11 months ago
Erick Friis ebf998acb6
Templates (#12294)
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Lance Martin <lance@langchain.dev>
Co-authored-by: Jacob Lee <jacoblee93@gmail.com>
11 months ago
Erick Friis 43257a295c
CLI Git Improvements (#12311)
- delete repo sources like pip
- git dep fixes
- error messaging
11 months ago
William FH 1d568e1add
Better wrap traceable (#12303)
If user function is wrapped as a traceable function, this will help hand
off the trace between the two.

Also update handling fields to reflect optional values
11 months ago
Eugene Yurtsev 5a71b81609
Relax type annotation for custom input/output types (#12300)
This is needed to be able to do stuff like:

```python
runnable.with_types(input_type=List[str])
```
11 months ago
William FH 988f6d9912
Rm langchain server (#12305) 11 months ago
wemysschen 3f16acc538
Add baidu cloud vector search in vectorstore and fix some unit test in vectorstores (#11605)
**Description:** 
Add baidu cloud vector search in vectorstore

---------

Co-authored-by: root <root@icoding-cwx.bcc-szzj.baidu.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
mrbean b7e559c7e1
use snippet search optionally (#12236)
Add an additional flag which allows for hitting our new endpoint.
11 months ago
felixocker cce132d146
fix sparql queries for relations in schema description (#9136)
- **Description**: Fix for the SPARQL QA chain: fixed SPARQL queries for
retrieving information about relations in the graph to create a textual
description of the schema for the language model. This should resolve
#8907
- **Issue**: #8907
- **Dependencies**: None
- **Tag maintainer**: @baskaryan, @hwchase17
11 months ago
Donato Azevedo d9f1bcf366
Strips leading/trailing whitespace before parsing xml (#12297)
**Description:** When LLMs output leading or trailing whitespace around XML
(when using XMLOutputParser), the parser would raise a `ValueError: Could
not parse output: ...`. However, leading or trailing whitespace is
"ignorable" in the sense of the XML standard.

**Issue:** I did not find an issue related.

**Dependencies:** None

**Tag maintainer:**

**Twitter handle:** donatoaz

Please make sure your PR is passing linting and testing before
submitting. Run `make format`, `make lint` and `make test` to check this
locally.

Done, updated unit test and ran `make docker_test`.
11 months ago
Erick Friis 47070b8314
CLI (#12284)
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
11 months ago
Shwu Ku 07c2649753
response parser for ArceeRetriever (#12270)
- **Description:** Response parser for arcee retriever, 
- **Issue:** follow-up pr on #11578 and
[discussion](https://github.com/arcee-ai/arcee-python/issues/15#issuecomment-1759874053),
  - **Dependencies:** NA

This PR implements a parser for the response from ArceeRetriever to
convert it to a langchain `Document`. This closes the loop of generation and
retrieval for Arcee DALMs in langchain.

The reference for the response parser is
[api-docs:retrieve](https://api.arcee.ai/docs#/v2/retrieve_model)

Attaching screenshot of working implementation:
<img width="1984" alt="Screenshot 2023-10-25 at 7 42 34 PM"
src="https://github.com/langchain-ai/langchain/assets/65639964/026987b9-34b2-4e4b-b87d-69fcd0c6641a">
\*api key deleted

---
Successful tests, lints, etc.
```shell
Re-run pytest with --snapshot-update to delete unused snapshots.
==================================================================================================================== slowest 5 durations =====================================================================================================================
1.56s call     tests/unit_tests/schema/runnable/test_runnable.py::test_retrying
0.63s call     tests/unit_tests/schema/runnable/test_runnable.py::test_map_astream
0.33s call     tests/unit_tests/schema/runnable/test_runnable.py::test_map_stream_iterator_input
0.30s call     tests/unit_tests/schema/runnable/test_runnable.py::test_map_astream_iterator_input
0.20s call     tests/unit_tests/indexes/test_indexing.py::test_cleanup_with_different_batchsize
======================================================================================================= 1265 passed, 270 skipped, 32 warnings in 6.55s =======================================================================================================
[ "." = "" ] || poetry run black .
All done!  🍰 
1871 files left unchanged.
[ "." = "" ] || poetry run ruff --select I --fix .
./scripts/check_pydantic.sh .
./scripts/check_imports.sh
poetry run ruff .
[ "." = "" ] || poetry run black . --check
All done!  🍰 
1871 files would be left unchanged.
[ "." = "" ] || poetry run mypy .
Success: no issues found in 1868 source files
poetry run codespell --toml pyproject.toml
poetry run codespell --toml pyproject.toml -w
```

Co-authored-by: Shubham Kushwaha <shwu@Shubhams-MacBook-Pro.local>
11 months ago
Johanna Appel c26ec7789f
CohereEmbeddings: Add max_retries and request_timeout (#12275)
Add max_retries and request_timeout to CohereEmbeddings, akin to how it
works in OpenAIEmbeddings.

Since the Cohere client already implements these parameters, we can
simply pass them down.

Uses parameters from these two cohere client objects:

https://github.com/cohere-ai/cohere-python/blob/main/cohere/client.py

https://github.com/cohere-ai/cohere-python/blob/main/cohere/client_async.py
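
A hedged usage sketch (parameter names taken from this PR's title; the values are illustrative):

```python
from langchain.embeddings import CohereEmbeddings

# Both settings are passed down to the underlying Cohere client, mirroring
# how OpenAIEmbeddings exposes them.
embeddings = CohereEmbeddings(
    cohere_api_key="...",  # placeholder
    max_retries=5,
    request_timeout=30,
)
vectors = embeddings.embed_documents(["hello world"])
```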
11 months ago
Nuno Campos 7108084947
Remove CLI (#12283)
11 months ago
Nuno Campos b5b2d07681
Pop max concurrency when recursing (#12281)
11 months ago
Bagatur 69f4e402e4
bump 323 (#12278) 11 months ago
David Duong c25b174db5
Add serialisation props to Fireworks and ChatFireworks (#12255) 11 months ago
Richard Adams fd5f549a9e
demonstrate use of RetrievalQAWithSourcesChain.from_chain (#12235)
**Description:** 
Documents further usage of RetrievalQAWithSourcesChain in an existing
test. I'd not found much documented usage of RetrievalQAWithSourcesChain
and how to get the sources out. This additional code will hopefully be
useful to other potential users of this retriever.

 **Issue:** No raised issue
 
**Dependencies:** No new dependencies needed to run the test (it already
needs `open-ai`, `faiss-cpu` and `unstructured`).

Note - `make lint` showed 8 linting errors  in unrelated files

---------

Co-authored-by: richarda23 <richard.c.adams@infinityworks.com>
11 months ago
James Braza 53f35c5f5c
Adding `STRUCTURED_FORMAT_SIMPLE_INSTRUCTIONS` missing backticks (#12238)
This PR fixes the fact that `STRUCTURED_FORMAT_SIMPLE_INSTRUCTIONS` was
missing backticks at the end
11 months ago
William FH 276c6ba115
Check for ls project in run tree context (#12242)
If I go traceable -> runnable when the project is manually specified,
the runnable won't be logged. This makes sure the session/project is
threaded through appropriately.
11 months ago
Vasek Mlejnsky 1f8094938f
Integrate E2B's data analysis/code interpreter (#12011)
This PR adds [E2B's](https://e2b.dev/) data analysis/code interpreter
sandbox as a tool

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Jakub Novak <jakub@e2b.dev>
11 months ago
Bagatur 286a29a49e
bump 322 and 34 (#12228) 11 months ago
Eugene Yurtsev 583dc49477
Add type to Generation and sub-classes, handle root validator (#12220)
* Add a type literal for the generation and sub-classes for serialization purposes.
* Fix the root validator of ChatGeneration to raise ValueError instead of KeyError or AttributeError if initialized improperly (see the sketch below).
* This change is done for langserve to make sure that llm related callbacks can be serialized/deserialized properly.
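
A minimal sketch of the validator pattern (generic pydantic v1 style, not the exact ChatGeneration code):

```python
from pydantic import BaseModel, root_validator


class ChatGenerationSketch(BaseModel):
    text: str = ""
    message: dict

    @root_validator(pre=True)
    def check_message(cls, values: dict) -> dict:
        # Raise ValueError (which pydantic reports cleanly) instead of letting a
        # KeyError/AttributeError escape when the input is malformed.
        if "message" not in values:
            raise ValueError("ChatGenerationSketch requires a 'message' field")
        return values
```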
11 months ago
Eugene Yurtsev 81052ee18e
Fix code block in runnable doc (#12221)
Fix code block syntax in runnable doc-string
11 months ago
Mikelarg 46e28b9613
Added GigaChat chat model support (#12201)
- **Description:** Added integration with
[GigaChat](https://developers.sber.ru/portal/products/gigachat) language
model.
- **Twitter handle:** @dvoshansky
11 months ago
Anurag Wagh d5c2ce7c2e
[fix] create redis vector index before adding docs, add prefix to doc… (#11257)
Fix Description: 
For Redis Vector integration in add_texts method, there were two issues
that lead to this bug.
1. Vector index is not being created leading to no such_index error 
2. `doc:index` prefix was also missing for Redis Keys. 

resolves #11197 
Maintainer: @baskaryan

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
Eugene Yurtsev 079d1f3b8e
Expose handle_event and ahandle_events as public API (#12181)
Expose functionality to handle generic events.
11 months ago
William FH 67c4fd0ad0
Update deprecation (#12178)
in runner_utils
11 months ago
Nir Kopler d3744175bf
Finetuned OpenAI models cost calculation #11715 (#12190)
**Description:**
Add cost calculation for fine tuned models (new and legacy), this is
required after OpenAI added new models for fine tuning and separated the
costs of I/O for fine tuned models.
Also I updated the relevant unit tests
see https://platform.openai.com/docs/guides/fine-tuning for more
information.
issue: https://github.com/langchain-ai/langchain/issues/11715

  - **Issue:** 11715
  - **Twitter handle:** @nirkopler
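
As a hedged sketch of the general idea only (the rates below are placeholders, not OpenAI's or Azure's actual fine-tuned pricing): input and output tokens are now priced separately.

```python
# Placeholder rates per 1K tokens; real fine-tuned model pricing differs.
FT_INPUT_RATE_PER_1K = 0.003
FT_OUTPUT_RATE_PER_1K = 0.006


def fine_tuned_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Compute cost with separate input/output rates for a fine-tuned model."""
    return (
        prompt_tokens / 1000 * FT_INPUT_RATE_PER_1K
        + completion_tokens / 1000 * FT_OUTPUT_RATE_PER_1K
    )


print(fine_tuned_cost(prompt_tokens=1200, completion_tokens=300))  # ~0.0054
```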
11 months ago
Spyros a2840a2b42
fix vertexai codey models (#12173)
**Description:**

This PR fixes issue #12156 by checking for Codey models appropriately
before result parsing.


Maintainer: @hwchase17 , @agola11
11 months ago
Hech d76f026d72
Fix flexible dimension and doc for DingoDB (#12187) 11 months ago
Erick Friis 95ae40ff90
Fix Anthropic Functions ainvoke (#12215)
Removes custom `NotImplementedError` in experimental anthropic
functions, allowing it to fallback on default `ainvoke` implementation.
11 months ago
Iskren Ivov Chernev d5d7ba582a
Improvements to llm/deepinfra (#10846)
- replace `requests` package with `langchain.requests`
- add `_acall` support
- add `_stream` and `_astream`
- freshen up the documentation a bit
- update vendor doc
11 months ago
sudranga f09f82541b
Expose configuration options in GraphCypherQAChain (#12159)
Allows for passing arguments into the LLM chains used by the
GraphCypherQAChain. This is to address a request by a user to include
memory in the Cypher-creating chain. Will keep the prompt variables
as-is to be backward compatible, but it would be a good idea to deprecate
them and use the **kwargs variables. Added a test case.

In general, I think it would be good for any chain to automatically pass
in a read-only memory (of its input) to its subchains whilst allowing for
an override. But that would be a different change.
11 months ago
Leonid Ganeline 11f13aed53
docstrings update (#12093)
Added missing docstrings. Added missing Args:, Returns:, Raises: sections.
11 months ago
Johnny Oshika ba20c14e28
Fix typo in stuff_prompt's system_template (#12063)
- **Description:** 

Add missing apostrophe in `user's` in stuff_prompt's system_template.
The first sentence in the system template went from:

> Use the following pieces of context to answer the users question.

to

> Use the following pieces of context to answer the user's question.

- **Issue:** 
- **Dependencies:** none
- **Tag maintainer:** @baskaryan
- **Twitter handle:** ojohnnyo
11 months ago
Holt Skinner 69d9eae5cd
feat: Add Client Info to available Google Cloud Clients (#12168)
- This is used internally to gather aggregate usage metrics for the
LangChain integrations

- Note: This cannot be added to some of the Vertex AI integrations at
this time because the SDK doesn't allow overriding the
[`ClientInfo`](https://googleapis.dev/python/google-api-core/latest/client_info.html#module-google.api_core.client_info)

- Added to:
  - BigQuery
  - Google Cloud Storage
  - Document AI
  - Vertex AI Model Garden
  - Document AI Warehouse
  - Vertex AI Search
  - Vertex AI Matching Engine (Cloud Storage Client)
 
@baskaryan, @eyurtsev, @hwchase17

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
11 months ago
Lukas Wolf 69f5f82804
Update extraction.py (#12207)
Description: Pass tags as argument to create_extraction_chain
Issue: create_extraction_chain does not pass tags to chain yet 

@baskaryan
11 months ago
Nuno Campos 34ffb94770
Remove GetLocal, PutLocal (#12133)
Do you agree?
11 months ago
Eric Hartford 8c150ad7f6
Add COBOL parser and splitter (#11674)
- **Description:** Add COBOL parser and splitter
  - **Issue:** n/a
  - **Dependencies:** n/a
  - **Tag maintainer:** @baskaryan 
  - **Twitter handle:** erhartford

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
11 months ago
John Mai ebf749c40c
Baichuan & Hunyuan set default api_base (#12059)
### Description
Baichuan & Hunyuan set default api_base env
11 months ago
Shilong Dai 99afc1b4f8
Fixed hardcoded "vector" and replaced with vector_query_field variable (#12126)
- **Description:** In the max_marginal_relevance_search function of the
ElasticsearchStore vector store, the name of the field corresponding to
the vector embedding of the document is hard coded in the delete
statement that drops the field from the document metadata. This results
in an exception if the vector embedding field is customized. This PR
changes the hard-coded "vector" into the vector_query_field variable.
  - **Issue:** None
  - **Dependencies:** None
  - **Tag maintainer:** @hwchase17

Co-authored-by: Shilong Dai <sdai@viperfish.net>
11 months ago
Vikram Shitole 0d44746430
10634: Added the capability to inject boto3 client in SagemakerEndpointEmbeddings (#12146)
**Description:** Allow injecting a boto3 client for cross-account access
scenarios when using SagemakerEndpointEmbeddings, and update the
documentation for the same in the sample notebook.

**Issue:SagemakerEndpointEmbeddings cross account capability #10634
#10184**

Dependencies: None
Tag maintainer:
Twitter handle:lethargicoder

Co-authored-by: Vikram(VS) <vssht@amazon.com>
11 months ago
aubin_mzt 66f8cb015d
Add connection args for pgvector vector store (#11930)
- **Description:** sqlalchemy's create_engine() does not take into account
connect_args, which are mandatory for managed PostgreSQL instances on cloud
providers (ssl_context, for example).
Also re-enabled create_vector_extension at post_init so the pgvector
class can be used seamlessly
- **Tag maintainer:** @baskaryan, @eyurtsev, @hwchase17.

---------

Co-authored-by: Sami Bargaoui <bargaoui.sam@gmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
11 months ago
NuODaniel 4d6243fa87
fix: doc string of default params in chat_models, llm qianfan (#12153)
- **Description:** a fix of the doc string in Qianfan
  - **Issue:** no
  - **Dependencies:** no
  - **Tag maintainer:** @baskaryan
  - **Twitter handle:** no
11 months ago
Predrag Gruevski f82bdf4613
Update deprecated `langchain` imports with suggested new paths. (#12164)
Let's help our users find the proper import to use instead of the
deprecated top-level ones.
11 months ago
Bagatur 963ff93476
bump 321 (#12161) 11 months ago
Nuno Campos d0505c0d47
Update default recursion_limit, update docs (#12134)
11 months ago
William FH 4f23aa677a
Fix Pickle Error (#12141)
If non-pickleable objects (like locks) get passed to the tracing
callback, they'll fail in the deepcopy. Fall back to a shallow copy in
these instances.
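
A minimal sketch of that fallback pattern (illustrative, not the exact tracer code):

```python
import copy
import threading


def safe_copy(obj):
    """Prefer a deep copy; fall back to a shallow copy for non-pickleable objects."""
    try:
        return copy.deepcopy(obj)
    except (TypeError, copy.Error):
        return copy.copy(obj)


payload = {"inputs": {"query": "hi"}, "lock": threading.Lock()}
copied = safe_copy(payload)  # deepcopy fails on the lock, so a shallow copy is returned
```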
11 months ago
Predrag Gruevski 95a1b598fe
Update to `actions/checkout@v4`. (#11951)
We don't use any of the new functionality at the moment. Just making
sure we don't fall back on versions and fail to benefit from new
patches. This is an easy upgrade and it's always harder to upgrade
across multiple major versions at once.
11 months ago
William FH 7c4f340cc0
Include Parent Run ID (#12139)
If you set local callbacks
11 months ago
omahs f3cc9bba5b
Fix typos (#12128)
Fix typos
11 months ago
Nuno Campos 1afdb40b48
Add optional config arg to RunnablePassthrough func arg (#12131)
11 months ago
Nuno Campos 325fdde8b4
Fix bug where types were lost when calling with_config or bind (#12137)
11 months ago
Nuno Campos 02dce74b97
Fix type hint for older py versions (#12132)
11 months ago
Nuno Campos d0ce374731
Allow specifying custom input/output schemas for runnables with .with_types() (#12083)
11 months ago
Harrison Chase ee69116761
move csv agent to langchain experimental (#12113) 11 months ago
Harrison Chase 03bf6ef473
add missing init files (#12114) 11 months ago
Bagatur ef8b180d6d
bump 320 (#12108) 11 months ago
Rotem Weiss 78d186fb44
Add Tavily Search API as a Tool (#12103)
Adding Tavily Search API as a tool. I will be the maintainer and
assaf_elovic is the Twitter handle.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
Bagatur 85302a9ec1
Add CI check that integration tests compile (#12090) 11 months ago
verlocks 5dbe456aae
Bug fix tongyi.py to be compatible with DashScope API (#11956)
The current ChatTongyi is not compatible with the DashScope API, which
causes an error when passing an API key to the chat model directly.
- **Description:** Update tongyi.py to be compatible with DashScope API.
Specifically, update parameter name "dashscope_api_key" to "api_key".
  - **Issue:** None.
- **Dependencies:** Nothing new, Tongyi would require DashScope as
before.
11 months ago
Tomaz Bratanic 82f4c0589c
Add neo4j graph environment variables (#12080) 11 months ago
Mohammad Mohtashim d5400f6502
Google Scholar Search Tool using serpapi (#11513)
- **Description:** Implementing the Google Scholar Tool as requested in
PR #11505. The tool will be using the [serpapi python
package](https://serpapi.com/integrations/python#search-google-scholar).
The main idea of the tool will be to return the results from a Google
Scholar search given a query as an input to the tool.

- **Tag maintainer:** @baskaryan, @eyurtsev, @hwchase17
11 months ago
Holt Skinner f5be2d525a
fix: Add `_serving_config` property to `GoogleVertexAISearchRetriever` (#12084)
- Fixes error:

```
ValueError: "GoogleVertexAISearchRetriever" object has no field "_serving_config"
```

Introduced in #11736

@baskaryan, @eyurtsev, @hwchase17 if you could review and merge quickly,
that would be appreciated :)
11 months ago
Nuno Campos 5fee61a207
Support runnable factories in .configurable_alts() (#12065)
11 months ago
Zhitao Xu a4c3a44712
Fix documentation typo in Clickhouse Class (#12047)
- **Description:** The return info in the documentation for
similarity_search_by_vector and similarity_search_with_relevance_scores
is wrong
11 months ago
William FH 25418b9b4d
Always add run ID (#12046)
in eval callback handler.

Useful if you're using a custom run evaluator and don't want to thread
things through.
11 months ago
Eugene Yurtsev 44d7763580
Add zapier deprecation warning (#12045)
Add zapier deprecation
11 months ago
John Mai 4188f046ec
Add Tencent Hunyuan chat model (#12022)
### Description:
The Tencent Hunyuan model, developed by Tencent, is a large language
model with robust Chinese text generation capabilities, adeptness in
logical reasoning within complex contexts, and reliable task execution
proficiency. For more information, see
[https://cloud.tencent.com/document/product/1729](https://cloud.tencent.com/document/product/1729)
11 months ago
Eugene Yurtsev 68599d98c2
More security notes (#12040)
Add more security notes
11 months ago
Bagatur 0006075b08
bump 319 (#12041) 11 months ago
John Mai 8eb40b5fe2
`baichuan_secret_key` use pydantic.types.SecretStr & Add Baichuan tests (#12031)
### Description
- `baichuan_secret_key` use pydantic.types.SecretStr
- Add Baichuan tests
11 months ago
Nuno Campos 85bac75729
nc/runnable-dynamic-schemas-from-config (#12038)
11 months ago
Nuno Campos 85eaa4ccee
Revert "nc/runnable-dynamic-schemas-from-config" (#12037)
This reverts commit a46eef64a7.

11 months ago
Nuno Campos a46eef64a7 nc/runnable-dynamic-schemas-from-config 11 months ago
Nuno Campos d392e030be
Add default value (#12032)
11 months ago
Kenneth Choe 62efe1ffb9
support add_embeddings for elasticsearch (#11002)
- **Description:** Provide a way to use different text for embedding.
- For example, if you are ingesting stack-overflow Q&As for RAG, you
would want to embed the questions and return the answer(s) for the hits.
With this change, the consumer of langchain can implement that easily.
- I noticed the similar function is added on faiss.py with #1912 which
was for performance reason, but I see the same function can be used to
achieve what I thought. So instead of changing Document class to have
embedding_content, I mimicked the implementation of faiss.py.
- The test should provide some guidance on how to use it. It would be
more intuitive if I just passed texts and embedding_texts as separate
arguments, but I chose to use a `zip`-ed object for consistency with the
faiss.py implementation (see the sketch after this list).
      - I plan to make a similar pull request for OpenSearch.
  - **Issue:** N/A
  - **Dependencies:** None other than the existing ones.
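
A rough sketch of the intended call pattern (mirroring the faiss.py-style `add_embeddings`; the exact keyword arguments here are illustrative):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.elasticsearch import ElasticsearchStore

embedder = OpenAIEmbeddings()

# Embed the questions, but store (and later return) the answers for hits.
questions = ["How do I reverse a list in Python?"]
answers = ["Use slicing: `my_list[::-1]`."]
question_vectors = embedder.embed_documents(questions)

store = ElasticsearchStore(
    es_url="http://localhost:9200", index_name="qa", embedding=embedder
)
# add_embeddings accepts zip-ed (text, embedding) pairs, as in faiss.py.
store.add_embeddings(zip(answers, question_vectors))
```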

Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
Bagatur 76d3afaef0
bump 318 (#12030) 11 months ago
Dmitry Tyumentsev 5dd2161c4b
add _acall method to YandexGPT (#12029)
- **Description:** Add async support for YandexGPT LLM model

Co-authored-by: Dmitry Tyumentsev <dmitry.tyumentsev@raftds.com>
11 months ago
Peter Krenesky 8425f33363
Pydantic v2 support for OpenAPI Specs (#11936)
- **Description:** Adding Pydantic v2 support for OpenAPI Specs 

- **Issue:**
- OpenAPI spec support was disabled because `openapi-schema-pydantic`
doesn't support Pydantic v2:
     #9205
     
     - Caused errors in `get_openapi_chain`
   
    - This may be the cause of #9520.

- **Tag maintainer:** @eyurtsev
- **Twitter handle:** kreneskyp


The root cause was that `openapi-schema-pydantic` hasn't been updated in
some time but
[openapi-pydantic](https://github.com/mike-oakley/openapi-pydantic)
forked and updated the project.
11 months ago
Joe McElroy c9f1768cb9
Elasticsearch Query Retriever: Use match + fuzziness for LIKE (#12023)
Updated the elasticsearch self query retriever to use the match clause
for the LIKE operator instead of the non-analyzed fuzzy search clause.
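
Roughly, the generated clause for a LIKE comparison moves from a non-analyzed fuzzy clause to an analyzed match clause with fuzziness (illustrative shapes, not the exact generated query):

```python
# Old: non-analyzed fuzzy clause for LIKE
old_clause = {"fuzzy": {"metadata.author.keyword": {"value": "jon doe"}}}

# New: analyzed match clause with fuzziness for LIKE
new_clause = {
    "match": {"metadata.author": {"query": "jon doe", "fuzziness": "AUTO"}}
}
```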

Other small updates include:
- fixing the stack inference integration test where the index's default
pipeline didn't use the inference pipeline created
- adding a user-agent to the old implementation to track usage
- improved the documentation for ElasticsearchStore filters
11 months ago
Nuno Campos 7db6aabf65
Update chat model output type (#11833)
---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
Simon Dai ed62984cb2
update Weaviate to support multi tenancy (#11842)
- **Description:** update Weaviate to support multi tenancy
  - **Issue:** 9956
  - **Dependencies:** 
  - **Tag maintainer:** hwchase17
  - **Twitter handle:** dsx1986_
11 months ago
hiigao f818ec49b8
Encapsulate alicloud pai-eas access method for chatmodels and llms (#11852)
### Description: 
This pull request provides EAS LLM service access methods by implementing
the `PaiEasEndpoint` and `PaiEasChatEndpoint` classes in the
`langchain.llms` and `langchain.chat_models` modules. Based on this PR,
LangChain users can build a chain to call a remote EAS LLM service and
get LLM inference results.

### About EAS Service
EAS is an Alicloud product on the Alibaba Cloud Machine Learning Platform
for AI, AliCloud PAI for short. EAS provides model inference deployment
services for users. We built an LLM inference service on EAS with a general
LLM docker image, so end users can quickly set up remote LLM instances to
load the majority of Hugging Face LLM models and serve as a backend for
most LLM apps.

### Dependencies
This PR doesn't involve any new dependencies.

---------

Co-authored-by: 子洪 <gaoyihong.gyh@alibaba-inc.com>
11 months ago
John Mai a6b483dcbc
Supported RetryOutputParser & RetryWithErrorOutputParser max_retries (#11903)
Description: Support max_retries for RetryOutputParser &
RetryWithErrorOutputParser (see the usage sketch below)
- max_retries: maximum number of retries for the parser.

Issue: None
Dependencies: None
Tag maintainer: @baskaryan 
Twitter handle:
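
A hedged usage sketch (class and factory names as they existed in `langchain.output_parsers` at the time; exact keyword placement may differ):

```python
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import DatetimeOutputParser, RetryOutputParser

# max_retries caps how many times the retry parser re-queries the LLM
# before giving up.
retry_parser = RetryOutputParser.from_llm(
    llm=ChatOpenAI(temperature=0),
    parser=DatetimeOutputParser(),
    max_retries=2,
)

# parse_with_prompt takes the failing completion plus the original prompt value:
# fixed = retry_parser.parse_with_prompt(bad_completion, prompt_value)
```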
11 months ago
Hugues Chocart 008c7df80d
[LLMonitorCallbackHandler] Refactor + add llmonitor-py dependency (#11948)
We now require users to have the pip package `llmonitor` installed. It
allows us to have cleaner code and avoid duplication between our library
and our code in LangChain.
11 months ago
Sian Cao 77fc2f7644
fix: impl missing embeddings method (#10823)
FAISS does not implement the embeddings method and uses embed_query to
embed texts, which is wrong for some embedding models.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
Holt Skinner 2661dc94f3
feat: Google Vertex AI Search Retriever - Add support for Website Data Stores (#11736)
- Only works for Data stores with Advanced Website Indexing
-
https://cloud.google.com/generative-ai-app-builder/docs/about-advanced-features
- Minor restructuring - Follow up to #10513
- Remove outdated docs (readded in
https://github.com/langchain-ai/langchain/pull/11620)
  - Move legacy class into new py file to clean up the directory
- Shouldn't cause backwards compatibility issues as the import works the
same way for users
11 months ago
Shorthills AI 4b6fdd7bf0
Update modal.py (#11588)
feat: Raise KeyError when 'prompt' key is missing in JSON response

This commit updates the error handling in the code to raise a KeyError
when the 'prompt' key is not found in the JSON response. This change
makes the code more explicit about the nature of the error, helping to
improve clarity and debugging.

@baskaryan, @eyurtsev.
11 months ago
William FH dfb4baa3f9
Fix Fireworks Callbacks (#12003)
I may be missing something but it seems like we inappropriately overrode
the 'stream()' method, losing callbacks in the process. I don't think
(?) it gave us anything in this case to customize it here?

See new trace:

https://smith.langchain.com/public/fbb82825-3a16-446b-8207-35622358db3b/r

and confirmed it streams.

Also fixes the stopwords issues from #12000
11 months ago
Wang Wei e26559f512
Add ERNIE-Bot-4 model support for ErnieBotChat. (#11969)
- **Description:** According to the document
https://cloud.baidu.com/doc/WENXINWORKSHOP/s/clntwmv7t, add ERNIE-Bot-4
model support for ErnieBotChat.
- **Dependencies:** Before using the ERNIE-Bot-4, you should have the
model's access authority.
11 months ago
Eugene Yurtsev f4bec9686d
Add more security notes (#11990)
Add more security notes
11 months ago
Eugene Yurtsev 3d81c76160
Add security notes to agent toolkits (#11989)
Add more security notes to agent toolkits.
11 months ago
Leonid Ganeline b81a4c1d94
docstrings added (#11988)
Added docstrings. Some docsctrings formatting.
11 months ago
Bagatur 35c7c1f050
bump 317 (#11986) 11 months ago
Bagatur 122af2effe
fix chroma from_texts bug (#11984) 11 months ago
Erick Friis c149954cc5
Hub Runnable (#11946)
Adds `langchain.runnables.hub.HubRunnable` for pulling configurable
objects from the hub
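A minimal sketch of pulling an object from the hub with the new class; the hub handle used here is illustrative:

```
from langchain.runnables.hub import HubRunnable

# Pull a prompt (or any other runnable object) from the LangChain Hub by handle.
prompt = HubRunnable("rlm/rag-prompt")
```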
11 months ago
Owen 9e24626e87
chore: remove duplicated export variables (#11962)
- **Description:** remove duplicated `__all__` variables
11 months ago
Nuno Campos 6bd9c1d2b3
Make prompt validation opt-in (#11973)
By default, replace input_variables with the correct value (see the sketch below).
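A hedged sketch of the new behaviour, assuming the existing `validate_template` flag is the opt-in switch:

```
from langchain.prompts import PromptTemplate

# With validation off (the new default), mismatched input_variables are
# silently replaced with the correct value.
lenient = PromptTemplate(
    template="Tell me a joke about {topic}",
    input_variables=[],  # wrong on purpose; corrected to ["topic"]
)

# Pass validate_template=True to opt back in to strict validation.
strict = PromptTemplate(
    template="Tell me a joke about {topic}",
    input_variables=["topic"],
    validate_template=True,
)
```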

11 months ago
Nuno Campos 9bc7e1851a
Ensure dict() does not raise not implemented error, which should instead be raised in our custom method save() (#11970)
.dict() is a Pydantic method that cannot raise exceptions, as it is used,
e.g., in `__eq__`.
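An illustrative sketch of the pattern (not the actual LangChain classes): keep `.dict()` exception-free and raise from the custom `save()` instead:

```
from pydantic import BaseModel


class NotSerializableRunnable(BaseModel):
    name: str = "example"

    def save(self, file_path: str) -> None:
        # Raise here rather than overriding dict(): Pydantic calls dict()
        # internally (e.g. in __eq__), so it must never raise.
        raise NotImplementedError(f"Saving is not supported for {type(self).__name__}")
```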

11 months ago
Nuno Campos 653cf56e0e Lint 11 months ago
Predrag Gruevski debcf053eb
Fix `invalid escape sequence` warnings by using raw strings for regexes. (#11943)
This code also generates warnings when our users' apps hit it, which is
annoying and doesn't look great. Let's fix it.
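A minimal illustration of the fix:

```
import re

# Before: "\d+" triggers an "invalid escape sequence" warning on newer Pythons.
# After: a raw string keeps the backslash literal for the regex engine.
pattern = re.compile(r"\d+")
assert pattern.findall("order 42, item 7") == ["42", "7"]
```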
11 months ago
Nuno Campos e4ae690244 Sort order 11 months ago
Nuno Campos b753bf3323 Make prompt validation opt-in
By default, replace input_variables with the correct value
11 months ago
Nuno Campos 202acce0c9 Ensure dict() does not raise not implemented error, which should instead be raised in our custom method save() 11 months ago
Predrag Gruevski 392df7b2e3
Type hints on varargs and kwargs that take anything should be `Any`. (#11950)
Type hinting `*args` as `List[Any]` means that each positional argument
should be a list. Type hinting `**kwargs` as `Dict[str, Any]` means that
each keyword argument should be a dict with string keys.

This is almost never what we actually wanted, and doesn't seem to be
what we want in any of the cases I'm replacing here.
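A short sketch of the rule: annotate the element type, not the container, for varargs and keyword arguments:

```
from typing import Any


# Wrong: this says each positional argument is a List[Any] and each keyword
# argument is a Dict[str, Any]:
#   def call(*args: List[Any], **kwargs: Dict[str, Any]) -> None: ...

# Right: each individual argument may be anything.
def call(*args: Any, **kwargs: Any) -> None:
    print(args, kwargs)
```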
11 months ago
Eugene Yurtsev 908c7bf33e
Add documentation to tools (#11938)
Add security notes to tools

---------

Co-authored-by: Predrag Gruevski <2348618+obi1kenobi@users.noreply.github.com>
11 months ago
Eugene Yurtsev 43dc669332
Update playwright documentation (#11949)
Add security note to playwright tool
11 months ago
Daniel Chalef 2beb767ae5
zep: Memory Retriever MMR Support & Docs Updates (#11954)
- Update Zep Memory and Retriever docstrings
- Zep Memory Retriever: Add support for native MMR
- Add MMR example to existing ZepRetriever Notebook
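A hedged sketch of using the retriever with native MMR after this change; the `search_type` and `mmr_lambda` parameter names are assumptions based on the description above, and the session id and URL are placeholders:

```
from langchain.retrievers import ZepRetriever

retriever = ZepRetriever(
    session_id="user-session-id",     # placeholder
    url="http://localhost:8000",      # placeholder Zep server
    top_k=5,
    search_type="mmr",                # assumed name for the new MMR mode
    mmr_lambda=0.5,                   # assumed relevance/diversity trade-off
)
docs = retriever.get_relevant_documents("What did we discuss about pricing?")
```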

@baskaryan
11 months ago
William FH a27fa9bf10
Use traceable context (#11896)
Example

```
from langchain.schema.runnable import RunnableLambda
from langsmith import traceable

chain = RunnableLambda(lambda x: x)

@traceable(run_type = "chain")
def my_traceable(a):
    chain.invoke(a)
my_traceable(5)
```

Would have a nested result.

This would NOT work for interleaving chains and traceables. E.g., things
like this would still not work well:

```
from langchain.schema.runnable import RunnableLambda
from langsmith import traceable

@traceable()
def other_traceable(a):
    return a

def foo(x):
    return other_traceable(x)
    
chain = RunnableLambda(foo)

@traceable(run_type = "chain")
def my_traceable(a):
    chain.invoke(a)
my_traceable(5)
```
11 months ago
Predrag Gruevski dcd0392423
Upgrade to newer black (23.10) and ruff (first 0.1.x!) versions. (#11944)
Minor lint dependency version upgrade to pick up latest functionality.

Ruff's new v0.1 version comes with lots of nice features, like
fix-safety guarantees and a preview mode for not-yet-stable features:
https://astral.sh/blog/ruff-v0.1.0
11 months ago
Trayan Azarov 1fd21ed21c
Chroma batching (#11203)
- **Description:** Chroma >= 0.4.10 added validation of batch sizes for
add/upsert. The maximum batch size depends on the SQLite limits of the
target system and varies. In this change, batch splitting was added for
Chroma >= 0.4.10, as the aforementioned validation is starting to surface
in the Chroma community (among users using LangChain); see the sketch
below.
 - **Issue:** N/A
 - **Dependencies:** N/A
 - **Tag maintainer:** @eyurtsev
 - **Twitter handle:** t_azarov
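A generic sketch of the batch-splitting idea (not the exact LangChain/Chroma implementation); the maximum batch size shown in the comment is illustrative:

```
from typing import Iterator, List


def split_into_batches(texts: List[str], max_batch_size: int) -> Iterator[List[str]]:
    # Yield slices no larger than the client's maximum batch size.
    for start in range(0, len(texts), max_batch_size):
        yield texts[start : start + max_batch_size]


# e.g. add each chunk separately instead of one oversized add/upsert call:
# for chunk in split_into_batches(docs, max_batch_size=5461):
#     vectorstore.add_texts(chunk)
```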
11 months ago
Guy Korland 9373b9c004
Add Graph interface (#11012)
  - **Description:** Add a Graph interface
  - **Tag maintainer:** @baskaryan @hwchase17 
  - **Twitter handle:** @g_korland
11 months ago
DanielZzz b647505280
feat: support ChatModels Qianfan `QianfanChatEndpoint` function_call (#11107)
- **Description:** 
* Add function_call ability for `QianfanChatEndpoint`, plus an
integration test for it
    * Add support for `model` and `endpoint` in the calling params
    * Add the raw response to the ChatModel message
- **Issue:** 
    * #10867 
    * #11105 
    * #10215
- **Dependencies:** no
- **Tag maintainer:** @baskaryan 
- **Twitter handle:** no
11 months ago
M Bharat lal 67300567d3
GCSFileLoader retrieve blob custom metadata and append to document metadata (#11066)
- **Description:** GCSFileLoader retrieves the blob's custom metadata and
appends it to the document's metadata (see the sketch below).
- **Issue:** #9975,
- **Tag maintainer:** @baskaryan please review
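A hedged sketch of the effect; the project, bucket, and blob names are placeholders, and the exact constructor arguments are assumptions:

```
from langchain.document_loaders import GCSFileLoader

loader = GCSFileLoader(
    project_name="my-gcp-project",   # placeholder
    bucket="my-bucket",              # placeholder
    blob="reports/q3.pdf",           # placeholder
)
docs = loader.load()
# After this change, any custom metadata set on the blob is merged into
# each document's metadata alongside the existing source fields.
print(docs[0].metadata)
```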

Co-authored-by: b0l00ib <bharat.lal@walmart.com>
11 months ago
billytrend-cohere f4742dce50
Add Cohere retrieval augmented generation to retrievers (#11483)
Add Cohere retrieval augmented generation to retrievers
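A hedged usage sketch; the wiring shown here (passing a Cohere chat model into the new retriever) is an assumption about how the integration is used:

```
from langchain.chat_models import ChatCohere
from langchain.retrievers import CohereRagRetriever

# Assumed wiring: the retriever delegates to Cohere's RAG-capable chat model.
rag = CohereRagRetriever(llm=ChatCohere())
docs = rag.get_relevant_documents("What is retrieval augmented generation?")
```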

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
刘 方瑞 0a24ac7388
Revised notebook and add delete to MyScale vector store (#11848)
- **Description:** 
  - Add `.delete` to the MyScale vector store (see the sketch below).
  - Revised the vector store notebooks.
- **Tag maintainer:** @baskaryan 
- **Twitter handle:** @myscaledb @mpsk_liu
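A minimal sketch of the new method, following the generic `VectorStore.delete` signature; it assumes a reachable MyScale backend with default settings, and any MyScale-specific filter arguments are not covered here:

```
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import MyScale

vectorstore = MyScale(embedding=OpenAIEmbeddings())
ids = vectorstore.add_texts(["hello", "world"])
# New in this PR: remove previously added records by id.
vectorstore.delete(ids=ids)
```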
11 months ago
John Mai 3fb5e4d185
Add Baichuan chat model (#11923)
Description: A large language model developed by Baichuan Intelligent
Technology, https://www.baichuan-ai.com/home
Issue: None
Dependencies: None
Tag maintainer:
Twitter handle:
11 months ago
Eugene Yurtsev 9ecb7240a4
Add security note to recursive url loader (#11934)
Add security note to recursive loader
11 months ago
maks-operlejn-ds 42dcc502c7
Anonymizer small fixes (#11915) 11 months ago
Eugene Yurtsev 90e9ec6962
Sitemap specify default filter url (#11925)
Specify default filter URL in sitemap loader and add a security note

---------

Co-authored-by: Predrag Gruevski <2348618+obi1kenobi@users.noreply.github.com>
11 months ago
Bagatur ba0d729961
bump 316 (#11928) 11 months ago
Eugene Yurtsev 12d7eaa0c2
Add security notices to toolkits (#11900)
This adds security notices to toolkits init, and to several toolkits.
We'll need to continue documenting the rest of the toolkits.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
Eugene Yurtsev 5f4a697ce3
Add deprecation warnings (#11899)
Add deprecation warnings

Co-authored-by: Bagatur <baskaryan@gmail.com>
11 months ago
Nuno Campos 8b79cf9566
Add lock for using global config enum weak map (#11920)
11 months ago
Nuno Campos 2a8ded6c8c
Export merge_configs function (#11916)
11 months ago
Nuno Campos 778e7c526e Add comment 11 months ago
Nuno Campos 19319e1746 Allow configs with None values 11 months ago
Nuno Campos b0d5882fe1 Export merge_configs function 11 months ago
Nuno Campos 12596b9a9b Add validation for configurable keys passed to .with_config()
- Fix some typing issues found while doing that
11 months ago
Nuno Campos 754aca794f remove print 11 months ago
Nuno Campos cf448a6314 Ensure that configurable fields with enums support deduplication 11 months ago
Leonid Ganeline 31f264169d
evaluation criteria (#11681)
The value being updated was:
` Criteria.MISOGYNY: "Is the submission misogynistic? If so, respond Y."
`
The " If so, respond Y." should not be there: this sub-string is not
present in any other criterion and should not be present here. I also
added a synonym for "misogynistic", as is done in many other criteria.
11 months ago