Commit Graph

1344 Commits (9f39c23a13ebc031d852b69e1a694b446c8997e6)

Author SHA1 Message Date
Nuno Campos b8e3e1118d Skip for py3.8 12 months ago
William FH db05ea2b78
Add from_embeddings for opensearch (#10957) 12 months ago
William FH 73693c18fc
Add support for project metadata in run_on_dataset (#11200) 12 months ago
James Braza b11f21c25f
Updated `LocalAIEmbeddings` docstring to better explain why `openai` (#10946)
Fixes my misgivings in
https://github.com/langchain-ai/langchain/issues/10912
12 months ago
Eugene Yurtsev 2c114fcb5e
Fix web-base loader (#11135)
Fix initialization

https://github.com/langchain-ai/langchain/issues/11095
12 months ago
jreinjr 3bc44b01c0
Typo fix to MathpixPDFLoader - changed processed_file_format default … (#10960)
…from mmd to md. https://github.com/langchain-ai/langchain/issues/7282

- **Description:** minor fix to a breaking typo: MathpixPDFLoader's
`processed_file_format` is "mmd" by default, which doesn't work; changing it to
"md" fixes the issue
- **Issue:** #7282
(https://github.com/langchain-ai/langchain/issues/7282)
  - **Dependencies:** none
  - **Tag maintainer:** @hwchase17
  - **Twitter handle:** none

Co-authored-by: jare0530 <7915+jare0530@users.noreply.ghe.oculus-rep.com>
12 months ago
Dr. Fabien Tarrade 66415eed6e
Support new versions of tiktoken that work with langchain (tag "^0.3.2" => ">=0.3.2,<0.6.0" and python "^3.9" => ">=3.9") (#11006)
- **Description:**
be able to use langchain with tiktoken versions other than 0.3.3, e.g.
0.5.1
  - **Issue:**
the conda-forge version cannot be installed since it applies all optional
dependencies:
       https://github.com/conda-forge/langchain-feedstock/pull/85
Replaces "^0.3.2" with ">=0.3.2,<0.6.0" and python "^3.9" with ">=3.9".
      Tested with python 3.10, langchain==0.0.288 and tiktoken==0.5.0

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
12 months ago
Clément Sicard 1b48d6cb8c
`LlamaCppEmbeddings`: adds `verbose` parameter, similar to `llms.LlamaCpp` class (#11038)
## Description

As of now, when instantiating and during inference, `LlamaCppEmbeddings`
emits a lot of verbose output when driven through the LangChain binding - it
is a bit annoying when computing the embeddings of long documents, for
instance.

This PR adds a `verbose` parameter to `LlamaCppEmbeddings` objects so that the
model's verbose output is **not** printed to `stderr`. It is natively
supported by `llama-cpp-python` and passed directly to the library - the
PR is hence very small.

The value of `verbose` is `True` by default, following the way it is
defined in [`LlamaCpp` (`llamacpp.py`
#L136-L137)](c87e9fb2ce/libs/langchain/langchain/llms/llamacpp.py (L136-L137))

## Issue

_No issue linked_

## Dependencies

_No additional dependency needed_

## To see it in action

```python
from langchain.embeddings import LlamaCppEmbeddings

MODEL_PATH = "<path_to_gguf_file>"

if __name__ == "__main__":
    llm_embeddings = LlamaCppEmbeddings(
        model_path=MODEL_PATH,
        n_gpu_layers=1,
        n_batch=512,
        n_ctx=2048,
        f16_kv=True,
        verbose=False,
    )
```

Co-authored-by: Bagatur <baskaryan@gmail.com>
12 months ago
Noah Czelusta a00a73ef18
Add last_edited_time and created_time props to NotionDBLoader (#11020)
# Description

Adds logic for NotionDBLoader to correctly populate `last_edited_time`
and `created_time` fields from [page
properties](https://developers.notion.com/reference/page#property-value-object).

There are no relevant tests for this code to be updated.
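
A minimal hedged sketch of how the new fields surface after this change (the token and database id below are placeholders):

```python
from langchain.document_loaders import NotionDBLoader

loader = NotionDBLoader(
    integration_token="<NOTION_INTEGRATION_TOKEN>",  # placeholder credential
    database_id="<NOTION_DATABASE_ID>",              # placeholder database id
)
docs = loader.load()
# With this PR, page-level timestamps are populated alongside the other metadata.
print(docs[0].metadata.get("created_time"), docs[0].metadata.get("last_edited_time"))
```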

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
12 months ago
Eugene Yurtsev e06e84b293
LangServe: Relax requirements (#11198)
Relax requirements
12 months ago
PaperMoose 5d7c6d1bca
Synthetic Data generation (#9472)
---------

Co-authored-by: William Fu-Hinthorn <13333726+hinthornw@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
12 months ago
Donatas Remeika a4e0cf6300
SearchApi integration (#11023)
Based on customers' requests for a native langchain integration,
SearchApi is ready to invest in the AI and LLM space, especially in
open-source development.

- This is our initial PR; later we want to improve it based on
customers' and langchain users' feedback. Most likely the changes will
affect how the final results string is built.
- We are creating similar native integrations in Python and JavaScript.
- The next plan is to integrate into Java, Ruby, Go, and others.
- Feel free to assign @SebastjanPrachovskij as a main reviewer for any
SearchApi-related changes. We will be glad to help and support
langchain development.
12 months ago
Bagatur 8cd18a48e4
fix trubrics lint issue (#11202) 12 months ago
Fynn Flügge b738ccd91e
chore: add support for TypeScript code splitting (#11160)
- **Description:** Adds typescript language to `TextSplitter`
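
A hedged sketch of the new capability, assuming the added language is exposed as `Language.TS` on the existing `from_language` API:

```python
from langchain.text_splitter import Language, RecursiveCharacterTextSplitter

ts_code = """
function greet(name: string): string {
  return `Hello, ${name}`;
}
"""

# Split TypeScript source on language-aware boundaries (Language.TS is assumed here).
splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.TS, chunk_size=60, chunk_overlap=0
)
print(splitter.split_text(ts_code))
```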

---------

Co-authored-by: Jacob Lee <jacoblee93@gmail.com>
12 months ago
Kenneth Choe 17fcbed92c
Support add_embeddings for opensearch (#11050)
- **Description:**
  - Make running the integration tests for OpenSearch easy.
- Provide a way to use different text for embedding; refer to #11002 for
more on the use case and design decision. A sketch of the intended usage follows below.
  - **Issue:** N/A
  - **Dependencies:** None other than the existing ones.
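
A rough sketch of the intended usage, treating the exact `add_embeddings` signature as an assumption (a list of `(text, embedding)` pairs, following the pattern used by other vector stores); the endpoint and index name are placeholders:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import OpenSearchVectorSearch

embedder = OpenAIEmbeddings()
display_texts = ["first snippet", "second snippet"]
# Embed different source text from what is stored/displayed, per the use case in #11002.
vectors = embedder.embed_documents(["embedded text #1", "embedded text #2"])

store = OpenSearchVectorSearch(
    opensearch_url="http://localhost:9200",  # placeholder endpoint
    index_name="demo-index",                 # placeholder index
    embedding_function=embedder,
)
store.add_embeddings(list(zip(display_texts, vectors)))  # assumed signature
```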
12 months ago
Jeff Kayne c586f6dc1b
Callback integration for Trubrics (#11059)
After contributing to some examples in the
[langsmith-cookbook](https://github.com/langchain-ai/langsmith-cookbook)
with @hinthornw, here is a PR that adds a callback handler to use
LangChain with [Trubrics](https://github.com/trubrics/trubrics-sdk).
12 months ago
Michael Landis a8db594012
fix: short-circuit black and mypy calls when no changes made (#11051)
Both black and mypy expect a list of files or directories as input.
As-is, the Makefile computes a list of files changed relative to the last
commit; these are passed to black and mypy in the `format_diff` and
`lint_diff` targets. This is done by way of the Makefile variable
`PYTHON_FILES`, which saves time by not running mypy and black
over the whole source tree.

When no changes have been made, this variable is empty, so the call to
black (and mypy) lacks input files. The call exits with error causing
the Makefile target to error out with:

```bash
$ make format_diff
poetry run black
Usage: black [OPTIONS] SRC ...

One of 'SRC' or 'code' is required.
make: *** [format_diff] Error 1
```

This is unexpected and undesirable, as the naive caller (that's me! 😄 )
will think something else is wrong. This commit smooths over this by
short circuiting when `PYTHON_FILES` is empty.
12 months ago
Michael Kim fbcd8e02f2
Change type annotations from LLMChain to Chain in MultiPromptChain (#11082)
- **Description:** The types of 'destination_chains' and 'default_chain'
in 'MultiPromptChain' were changed from 'LLMChain' to 'Chain', and
variables declared overlapping with the parent class were removed.
- **Issue:** When a chain class that inherits only from Chain and not from LLMChain,
such as 'SequentialChain' or 'RetrievalQA', is passed in
'destination_chains' or 'default_chain', a pydantic validation error is
raised.
- Reproduction code (the retriever, document chains, and router chain are assumed to be defined elsewhere):
```
from langchain.chains import ConversationalRetrievalChain
from langchain.chains.router import MultiPromptChain

retrieval_chain = ConversationalRetrievalChain(
    retriever=doc_retriever,
    combine_docs_chain=combine_docs_chain,
    question_generator=question_gen_chain,
)

destination_chains = {
    'retrieval': retrieval_chain,
}

main_chain = MultiPromptChain(
    router_chain=router_chain,
    destination_chains=destination_chains,
    default_chain=default_chain,
    verbose=True,
)
```

 `make format`, `make lint` and `make test`
12 months ago
Piyush Jain 32d09bcd1e
Expanded version range for networkx, fixed sample notebook (#11094)
## Description
Expanded the upper bound for `networkx` dependency to allow installation
of latest stable version. Tested the included sample notebook with
version 3.1, and all steps ran successfully.
---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
12 months ago
Piotr Mardziel b40ecee4b9
FIx eval prompt (#11087)
**Description:** fixes a common typo in some of the eval criteria.
12 months ago
Guy Korland 5564833bd2
Add `add_graph_documents` support for FalkorDBGraph (#11122)
Adding `add_graph_documents` support for FalkorDBGraph and extending the
`Neo4JGraph` api so it can support `cypher.py`
12 months ago
Tomaz Bratanic 7d25a65b10
add from_existing_graph to neo4j vector (#11124)
This PR adds the option to create a Neo4jvector instance from existing
graph, which embeds existing text in the database and creates relevant
indices.
12 months ago
Noah Stapp 2c952de21a
Add support for MongoDB Atlas $vectorSearch vector search (#11139)
Adds support for the `$vectorSearch` operator for
MongoDBAtlasVectorSearch, which was announced at .Local London
(September 26th, 2023). This change breaks compatibility with
the existing `$search` operator used by the original
integration (https://github.com/langchain-ai/langchain/pull/5338) due to
incompatibilities between the two Atlas search implementations.
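
A hedged sketch of querying through the store after this change (connection string, namespace, and index name are placeholders):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import MongoDBAtlasVectorSearch

store = MongoDBAtlasVectorSearch.from_connection_string(
    "mongodb+srv://<user>:<password>@<cluster>/",  # placeholder connection string
    "my_db.my_collection",                         # namespace: <db>.<collection>
    OpenAIEmbeddings(),
    index_name="default",                          # Atlas vector search index name
)
# similarity_search now goes through the $vectorSearch aggregation stage.
docs = store.similarity_search("what does the contract say about termination?", k=4)
```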

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
12 months ago
Hugues b599f91e33
LLMonitor Callback handler: fix bug (#11128)
Here is a small bug fix for the LLMonitor callback handler. I've also
added user identification capabilities.
12 months ago
William FH e9b51513e9
Shared Executor (#11028) 12 months ago
Justin Plock 926e4b6bad
[Feat] Add optional client-side encryption to DynamoDB chat history memory (#11115)
**Description:** Added optional client-side encryption to the Amazon
DynamoDB chat history memory with an AWS KMS Key ID using the [AWS
Database Encryption SDK for
Python](https://docs.aws.amazon.com/database-encryption-sdk/latest/devguide/python.html) (see the sketch after this list)
**Issue:** #7886
**Dependencies:**
[dynamodb-encryption-sdk](https://pypi.org/project/dynamodb-encryption-sdk/)
**Tag maintainer:**  @hwchase17 
**Twitter handle:** [@jplock](https://twitter.com/jplock/)
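
A rough sketch of opting into encryption, assuming the new option surfaces as a `kms_key_id` argument on the existing history class (the parameter name, table name, session id, and key alias are assumptions/placeholders):

```python
from langchain.memory.chat_message_histories import DynamoDBChatMessageHistory

history = DynamoDBChatMessageHistory(
    table_name="SessionTable",               # existing DynamoDB table
    session_id="user-123",
    kms_key_id="alias/my-chat-history-key",  # assumed name of the new client-side encryption option
)
history.add_user_message("hello")
print(history.messages)
```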

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
12 months ago
Eugene Yurtsev 4947ac2965
Add langserve version (#11195)
Add langserve version
12 months ago
Joseph McElroy 822fc590d9
[ElasticsearchStore] Improve migration text to ElasticsearchStore (#11158)
As we move developers to the new `ElasticsearchStore` implementation, we
want to keep the ElasticVectorSearch class available while developers
transition slowly to the new store.

To speed up this process, I updated the blurb to give them a better
recommendation of why they should use ElasticsearchStore.
12 months ago
Naveen Tatikonda 9b0029b9c2
[OpenSearch] Add Self Query Retriever Support to OpenSearch (#11184)
### Description
Add Self Query Retriever Support to OpenSearch

### Maintainers
@rlancemartin, @eyurtsev, @navneet1v

### Twitter Handle
@OpenSearchProj

Signed-off-by: Naveen Tatikonda <navtat@amazon.com>
12 months ago
Arthur Telders 0da484be2c
Add source metadata to OutlookMessageLoader (#11183)
Description: Add "source" metadata to OutlookMessageLoader

This pull request adds the "source" metadata to the OutlookMessageLoader
class in the load method. The "source" metadata is required when
indexing with RecordManager in order to sync the index documents with a
source.

Issue: None

Dependencies: None

Twitter handle: @ATelders

Co-authored-by: Arthur Telders <arthur.telders@roquette.com>
12 months ago
Bagatur 3508e582f1
add anthropic scheduled tests and unit tests (#11188) 12 months ago
Eugene Yurtsev fd96878c4b
Fix anthropic secret key when passed in via init (#11185)
Fixes anthropic secret key when passed via init

https://github.com/langchain-ai/langchain/issues/11182
12 months ago
Bagatur f201d80d40
temporarily skip embedding empty string test (#11187) 12 months ago
Eugene Yurtsev b3cf9c8759
LangServe: Update langchain requirement for publishing (#11186)
Update langchain requirement for publishing
12 months ago
mani2348 89ddc7cbb6
Update Bedrock service name to "bedrock-runtime" and model identifiers (#11161)
- **Description:** Bedrock updated boto service name to
"bedrock-runtime" for the InvokeModel and InvokeModelWithResponseStream
APIs. This update also includes new model identifiers for Titan text,
embedding and Anthropic.

Co-authored-by: Mani Kumar Adari <maniadar@amazon.com>
12 months ago
Eugene Yurtsev de3e25683e
Expose lc_id as a classmethod (#11176)
* Expose LC id as a class method 
* User should not need to know that the last part of the id is the class
name
12 months ago
Nuno Campos 5ca461160b Lint 12 months ago
Nuno Campos 151f27d502 Lint 12 months ago
Eugene Yurtsev 4ba9c16f74 mypy 12 months ago
Eugene Yurtsev 44489e7029
LangServe: Clean up init files (#11174)
Clean up init files
12 months ago
Akio Nishimura 785b9d47b7
Fix stop key of TextGen. (#11109)
The key of stopping strings used in text-generation-webui api is
[`stopping_strings`](https://github.com/oobabooga/text-generation-webui/blob/main/api-examples/api-example.py#L51),
not `stop`.
12 months ago
Eugene Yurtsev d1d7d0cb27 x 12 months ago
Eugene Yurtsev c86b2b5e42 x 12 months ago
Eugene Yurtsev fe4f3b8fdf x 12 months ago
Eugene Yurtsev a5b15e9d0f x 12 months ago
Nuno Campos 5c1f462bb9 Implement better reprs for Runnables 12 months ago
Nan LI 53a9d6115e
Xata chat memory FIX (#11145)
- **Description:** Changed the data type from `text` to `json` in Xata for
improved performance. Also corrected the `additionalKwargs` key in the
`messages()` function to `additional_kwargs` to adhere to `BaseMessage`
requirements.
- **Issue:** The chat history's `messages()` would return `{}` for
`additional_kwargs`, because the key was wrongly named `additionalKwargs`.
  - **Dependencies:**  N/A
  - **Tag maintainer:** N/A
  - **Twitter handle:** N/A

My PR passes linting and testing.
12 months ago
William FH 8ae9b71e41
Async support for OpenAIFunctionsAgentOutputParser (#11140) 12 months ago
Bagatur ce08f436db
Expose loads and dumps in load namespace 12 months ago
Nuno Campos cfa2203c62
Add input/output schemas to runnables (#11063)
This adds `input_schema` and `output_schema` properties to all
runnables, which are Pydantic models for the input and output types
respectively. These are inferred from the structure of the Runnable as
much as possible; the only manual typing needed is to
- optionally add type hints to lambdas (which get translated to
input/output schemas)
- optionally add a type hint to RunnablePassthrough

These schemas can then be used to create JSON Schema descriptions of the
input and output types; see the tests.
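
A brief hedged sketch of how these properties can be inspected (the prompt/model choice is illustrative):

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
chain = prompt | ChatOpenAI()

# Pydantic models inferred from the structure of the sequence.
print(chain.input_schema.schema())   # JSON Schema for the expected input (a "topic" field)
print(chain.output_schema.schema())  # JSON Schema for the chat model output
```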

- [x] Ensure no InputType and OutputType in our classes use abstract
base classes (replace with union of subclasses)
- [x] Implement in BaseChain and LLMChain
- [x] Implement in RunnableBranch
- [x] Implement in RunnableBinding, RunnableMap, RunnablePassthrough,
RunnableEach, RunnableRouter
- [x] Implement in LLM, Prompt, Chat Model, Output Parser, Retriever
- [x] Implement in RunnableLambda from function signature
- [x] Implement in Tool

12 months ago
Eugene Yurtsev b05bb9e136
LangServe (#11046)
Adds LangServe package

* Integrate Runnables with Fast API creating Server and a RemoteRunnable
client
* Support multiple runnables for a given server
* Support sync/async/batch/abatch/stream/astream/astream_log on the
client side (using async implementations on server)
* Adds validation using annotations (relying on pydantic under the hood)
-- this still has some rough edges -- e.g., open api docs do NOT
generate correctly at the moment
* Uses pydantic v1 namespace

Known issues: type translation code doesn't handle a lot of types (e.g.,
TypedDicts)

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
12 months ago
Nuno Campos 77ce9ed6f1
Support using async callback handlers with sync callback manager (#10945)
The current behaviour just calls the handler without awaiting the
coroutine, which results in exceptions/warnings, and obviously doesn't
actually execute whatever the callback handler does

12 months ago
Bagatur 48a04aed75
bump 304 (#11147) 1 year ago
Jonathan Evans 23065f54c0
Added prompt wrapping for Claude with Bedrock (#11090)
- **Description:** Prompt wrapping requirements have been implemented on
the service side of AWS Bedrock for the Anthropic Claude models to
provide parity between Anthropic's offering and Bedrock's offering. This
overnight change broke most existing implementations of Claude, Bedrock
and Langchain. This PR reuses the Anthropic LLM implementation
to enforce alias/role wrapping and implements it in the existing
mechanism for building the request body. This has also been tested to
fix the chat_model implementation. Happy to answer any further
questions or make changes where necessary to get things patched and up
to PyPI ASAP, TY.
- **Issue:** No issue opened at the moment, though will update when
these roll in.
  - **Dependencies:** None

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
xiaoyu b87cc8b31e
add 3 property types in metadata for notiondb loader (#8509)
### Description: 
NotionDB supports a number of common property types. I found three
common types that are not included in the notiondb loader, so when pages
using them are loaded with notiondb, some metadata information
is not passed on to langchain. Therefore, I added three common types:
- date
- created_time
- last_edited_time

### Issue: 
no
### Dependencies: 
No dependencies added :)
### Tag maintainer: 
@rlancemartin, @eyurtsev
### Twitter handle: 
@BJTUTC
1 year ago
Harrison Chase 258d67b0ac
Revert "improve the performance of base.py" (#11143)
Reverts langchain-ai/langchain#8610

this is actually an oversight - this merges all dfs into one df. we DO
NOT want to do this - the idea is we work and manipulate multiple dfs
1 year ago
Mohamad Zamini 9306394078
improve the performance of base.py (#8610)
This removes the use of the intermediate df list and directly
concatenates the dataframes if path is a list of strings. The pd.concat
function combines the dataframes efficiently, making it faster and more
memory-efficient compared to appending dataframes to a list.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
1 year ago
Mincoolee 05b75f3f13
feat: add support for arxiv identifier in ArxivAPIWrapper() (#9318)
- Description: this PR adds support for arXiv identifiers to the
ArxivAPIWrapper. I modified the `run()` and `load()` functions in
`arxiv.py`, using a regex to recognize whether the query is in the form of an
arXiv identifier (see
[https://info.arxiv.org/help/find/index.html](https://info.arxiv.org/help/find/index.html)).
If so, it directly fetches the paper corresponding to that
identifier (see the sketch below). I also modified and added tests in `test_arxiv.py`.
  - Issue: #9047 
  - Dependencies: N/A
  - Tag maintainer: N/A
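
A hedged sketch of the new behavior (the identifier below is only an example of the arXiv id format):

```python
from langchain.utilities import ArxivAPIWrapper

arxiv = ArxivAPIWrapper()
# A plain keyword query still works as before.
print(arxiv.run("quantum computing"))
# With this change, a query that matches the arXiv identifier format fetches that specific paper.
print(arxiv.run("1605.08386"))
```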

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
1 year ago
William FH d3c2ca5656
Enhanced pairwise error (#11131) 1 year ago
Taqi Jaffri b7e9db5e73
Stop sequences in fireworks, plus notebook updates (#11136)
The new Fireworks and FireworksChat implementations are awesome! Added
in this PR https://github.com/langchain-ai/langchain/pull/11117 thank
you @ZixinYang

However, I think stop words were not plumbed correctly. I've made some
simple changes to do that, and also updated the notebook to be a bit
clearer with what's needed to use both new models.


---------

Co-authored-by: Taqi Jaffri <tjaffri@docugami.com>
1 year ago
William FH 33da8bd711
Add Exact match and Regex Match Evaluators (#11132) 1 year ago
Harrison Chase e355606b11
add more import checks (#11033) 1 year ago
Dan Bolser efb7c459a2
Update base.py (#10843)
Fixing a typo in the example code in the docstring...

You have to start somewhere though right?

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
1 year ago
tanujtiwari-at a79f595543
Support extra tools argument for pandas agent toolkit (#11040)
**Description** 

We support adding new tools in some toolkits already like the [SQLAgent
toolkit](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/agents/agent_toolkits/sql/base.py#L27).

Related
[SO](https://stackoverflow.com/questions/76583163/are-langchain-toolkits-able-to-be-modified-can-we-add-tools-to-a-pandas-datafra)
thread
This replicates the same functionality here, so users can add custom
bespoke tools.
1 year ago
Bagatur 410ac8129d
bump 303 (#11120) 1 year ago
Bagatur 8e4dbae428
Add fireworks chat model (#11117) 1 year ago
Bagatur 657581dbdf Fix ChatFireworks typing 1 year ago
Bagatur 12aad659dd add ChatFireworks to chat_models 1 year ago
Bagatur 872ebdaf90 remove FireworksChat from llms 1 year ago
Bagatur 9451240941 Fix fireworks chat linting issues 1 year ago
Tomáš Dvořák 865a21938c
speed up enforce_stop_tokens helper function (#10984)
**Description:**

Since `enforce_stop_tokens` only needs the text up to the first
occurrence of a stop token, we can speed up the execution by setting the
optional `maxsplit` parameter to 1.
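
A small sketch of the idea (a simplified version of the helper, not the exact library code):

```python
import re
from typing import List

def enforce_stop_tokens(text: str, stop: List[str]) -> str:
    """Cut the text off at the first occurrence of any stop token."""
    # maxsplit=1 stops scanning after the first match instead of splitting the whole string.
    return re.split("|".join(map(re.escape, stop)), text, maxsplit=1)[0]

print(enforce_stop_tokens("Final answer.\nObservation: ignored tail", ["Observation:"]))
```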

Tag maintainer:
@agola11
@hwchase17

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Austin Walker bb41252dab
fix: bump min_unstructured_version for UnstructuredAPIFileLoader (#11025)
**Description:** New metadata fields were added to
`unstructured==0.10.15`, and our hosted api has been updated to reflect
this. When users call `partition_via_api` with an older version of the
library, they'll hit a parsing error related to the new fields.
1 year ago
William FH 75b3893daf
Fix runnable branch callbacks (#11091)
We aren't calling on_chain_end here unless we use the default option
1 year ago
Bagatur 6c5251feb0 poetry 1 year ago
Bagatur 5310184f96 poetry 1 year ago
Cynthia Yang 6dd44ff1c0
Refactor Fireworks and add ChatFireworks (#3) (#10597)
Description 
* Refactor Fireworks within Langchain LLMs.
* Remove FireworksChat within Langchain LLMs.
* Add ChatFireworks (which uses chat completion api) to Langchain chat
models.
* Users have to install `fireworks-ai` and register an api key to use
the api.

Issue - Not applicable
Dependencies - None
Tag maintainer - @rlancemartin @baskaryan
1 year ago
Bagatur 5514ebe859
Don't type chains in output_parsers (#11092)
Can't use TYPE_CHECKING style imports for pydantic params because it will try to instantiate the typed object by default.
1 year ago
CG80499 64385c4eae
Make pairwise comparison chain more like LLM as a judge (#11013)
- **Description:** Adds LLM-as-a-judge as an eval chain
- **Tag maintainer:** @hwchase17

---------

Co-authored-by: William FH <13333726+hinthornw@users.noreply.github.com>
1 year ago
Joseph McElroy 175ef0a55d
[ElasticsearchStore] Enable custom Bulk Args (#11065)
This enables bulk args like `chunk_size` to be passed down from the
ingest methods (from_texts, from_documents) to the bulk
API.

This helps alleviate issues where bulk importing a large amount of
documents into Elasticsearch was resulting in a timeout.

Contribution Shoutout
- @elastic

- [x] Updated Integration tests

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Eugene Yurtsev d19fd0cfae
LogEntry/LogStream use str instead of uuid for id (#11080)
Cast the UUID to a string
1 year ago
Bagatur d85339b9f2
extract sublinks exclude by abs path (#11079) 1 year ago
Bagatur 7ee8b2d1bf
exclude dirs in async recursive loading (#11077) 1 year ago
Bagatur 12fb393a43
bump 302 (#11070) 1 year ago
Bagatur 097ecef06b
refactor web base loader (#11057) 1 year ago
Bagatur 487611521d
fix root import (#11072) 1 year ago
Bagatur a2f7246f0e
skip excluded sublinks before recursion (#11036) 1 year ago
William FH 4aec587979
Update LangSmith Walkthrough (#11043) 1 year ago
Harrison Chase bea78b3271
make warnings more modular (#11047) 1 year ago
Harrison Chase c87e9fb2ce
conditional imports (#11017) 1 year ago
Tomaz Bratanic 0625ab7a9e
Filtering graph schema for Cypher generation (#10577)
Sometimes you don't want the LLM to be aware of the whole graph schema,
and want it to ignore parts of the graph when it is constructing Cypher
statements.
1 year ago
Palau 89ef440c14
Kay retriever (#10657)
- **Description**: Adding retrievers for [kay.ai](https://kay.ai) and
SEC filings powered by Kay and Cybersyn. Kay provides context as a
service: it's an API built for RAG.
- **Issue**: N/A
- **Dependencies**: Just added a dep to the
[kay](https://pypi.org/project/kay/) package
- **Tag maintainer**: @baskaryan @hwchase17 Discussed in slack
- **Twitter handle:** [@vishalrohra_](https://twitter.com/vishalrohra_)

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Harrison Chase 5f13668fa0
Harrison/move vectorstore base (#11030) 1 year ago
Eugene Yurtsev af5390d416
Add a batch size for cleanup (#10948)
Add pagination to indexing cleanup to deal with large numbers of
documents that need to be deleted.
1 year ago
Eugene Yurtsev 09486ed188
Update Serializable to use classmethods (#10956) 1 year ago
Taqi Jaffri b7290f01d8
Batching for hf_pipeline (#10795)
The huggingface pipeline in langchain (used for locally hosted models)
does not support batching. If you send in a batch of prompts, it just
processes them serially using the base implementation of _generate:
https://github.com/docugami/langchain/blob/master/libs/langchain/langchain/llms/base.py#L1004C2-L1004C29

This PR adds support for batching in this pipeline, so that GPUs can be
fully saturated. I updated the accompanying notebook to show GPU batch
inference.

---------

Co-authored-by: Taqi Jaffri <tjaffri@docugami.com>
1 year ago
Bagatur aa6e6db8c7
bump 301 (#11018) 1 year ago
Nuno Campos 956ee981c0
Fix issue where requests wrapper passes auth kwarg twice (#11010)

Closes #8842
1 year ago
Scotty 88a02076af
fix ChatMessageChunk concat error (#10174)

- Description: fix `ChatMessageChunk` concat error 
- Issue: #10173 
- Dependencies: None
- Tag maintainer: @baskaryan, @eyurtsev, @rlancemartin
- Twitter handle: None

---------

Co-authored-by: wangshuai.scotty <wangshuai.scotty@bytedance.com>
Co-authored-by: Nuno Campos <nuno@boringbits.io>
1 year ago
Naveen Tatikonda b0f21e2b50
[OpenSearch] Pass ids using from_texts and indexname in add_texts and search (#10969)
### Description
This PR makes the following changes to OpenSearch:
1. Pass optional ids with `from_texts`
2. Pass an optional index name with `add_texts` and `search` instead of
using the same index name that was used during `from_texts`

### Issue
https://github.com/langchain-ai/langchain/issues/10967

### Maintainers
@rlancemartin, @eyurtsev, @navneet1v

Signed-off-by: Naveen Tatikonda <navtat@amazon.com>
1 year ago
deanchanter f945426874
Resolve GHI 10674 (#10977) 1 year ago
Anar ff732e10f8
LLMRails Embedding (#10959)
LLMRails  Embedding Integration
This PR provides integration with LLMRails. Implemented here are:

langchain/embeddings/llm_rails.py
docs/extras/integrations/text_embedding/llm_rails.ipynb


Hi @hwchase17, after adding our vectorstore integration to langchain with
confirmation from you and @baskaryan, we now want to add our embedding
integration.

---------

Co-authored-by: Anar Aliyev <aaliyev@mgmt.cloudnet.services>
Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Michael Feil 94e31647bd
Support for Gradient.ai embedding (#10968)
Adds support for gradient.ai's embedding model.

This will remain a Draft, as the code will likely be refactored with the
`pip install gradientai` python sdk.
1 year ago
C.J. Jameson 05d5fcfdf8
fix make-coverage local invocation #10941 (#10974)
Fix the invocation of `make coverage` in `libs/langchain`

Fixes #10941
1 year ago
Bagatur 040d436b3f
Add vertex scheduled test (#10958) 1 year ago
Piyush Jain 8602a32b7e
Fixes error with providers that don't have model_id (#10966)
## Description
Fixes error with using the chain for providers that don't have
`model_id` field.


![image](https://github.com/langchain-ai/langchain/assets/289369/a86074cf-6c99-4390-a135-b3af7a4f0827)
1 year ago
Nuno Campos 7b13292e35
Remove python eval from vector sql db chain (#10937)
1 year ago
Richard Wang b809c243af
Fix bug in `index` api (#10614)

- **Description:** a fix for `index`.
- **Issue:** Not applicable.
- **Dependencies:** None
- **Tag maintainer:** 
- **Twitter handle:** richarddwang

# Problem
Replication code
```python
from pprint import pprint
from langchain.embeddings import OpenAIEmbeddings
from langchain.indexes import SQLRecordManager, index
from langchain.schema import Document
from langchain.vectorstores import Qdrant
from langchain_setup.qdrant import pprint_qdrant_documents, create_inmemory_empty_qdrant

# Documents
metadata1 = {"source": "fullhell.alchemist"}
doc1_1 = Document(page_content="1-1 I have a dog~", metadata=metadata1)
doc1_2 = Document(page_content="1-2 I have a daugter~", metadata=metadata1)
doc1_3 = Document(page_content="1-3 Ahh! O..Oniichan", metadata=metadata1)
doc2 = Document(page_content="2 Lancer died again.", metadata={"source": "fate.docx"})

# Create empty vectorstore
collection_name = "secret_of_D_disk"
vectorstore: Qdrant = create_inmemory_empty_qdrant()

# Create record Manager
import tempfile
from pathlib import Path

record_manager = SQLRecordManager(
    namespace=f"qdrant/{collection_name}",
    db_url=f"sqlite:///{Path(tempfile.gettempdir())/collection_name}.sql",
)
record_manager.create_schema()  # required

sync_result = index(
    [doc1_1, doc1_2, doc1_2, doc2],
    record_manager,
    vectorstore,
    cleanup="full",
    source_id_key="source",
)
print(sync_result, end="\n\n")
pprint_qdrant_documents(vectorstore)
```
<details>
<summary>Code of helper functions `pprint_qdrant_documents` and
`create_inmemory_empty_qdrant`</summary>

```python
from pprint import pformat  # used by pprint_document below

def create_inmemory_empty_qdrant(**from_texts_kwargs):
    # Qdrant requires the vector size, which can only be known after applying the embedder
    vectorstore = Qdrant.from_texts(["dummy"], location=":memory:", embedding=OpenAIEmbeddings(), **from_texts_kwargs)
    dummy_document_id = vectorstore.client.scroll(vectorstore.collection_name)[0][0].id
    vectorstore.delete([dummy_document_id])
    return vectorstore

def pprint_qdrant_documents(vectorstore, limit: int = 100, **scroll_kwargs):
    document_ids, documents = [], []
    for record in vectorstore.client.scroll(
        vectorstore.collection_name, limit=100, **scroll_kwargs
    )[0]:
        document_ids.append(record.id)
        documents.append(
            Document(
                page_content=record.payload["page_content"],
                metadata=record.payload["metadata"] or {},
            )
        )
    pprint_documents(documents, document_ids=document_ids)

def pprint_document(document: Document = None, document_id=None, return_string=False):
    displayed_text = ""
    if document_id:
        displayed_text += f"Document {document_id}:\n\n"
    displayed_text += f"{document.page_content}\n\n"
    metadata_text = pformat(document.metadata, indent=1)
    if "\n" in metadata_text:
        displayed_text += f"Metadata:\n{metadata_text}"
    else:
        displayed_text += f"Metadata:{metadata_text}"

    if return_string:
        return displayed_text
    else:
        print(displayed_text)


def pprint_documents(documents, document_ids=None):
    if not document_ids:
        document_ids = [i + 1 for i in range(len(documents))]

    displayed_texts = []
    for document_id, document in zip(document_ids, documents):
        displayed_text = pprint_document(
            document_id=document_id, document=document, return_string=True
        )
        displayed_texts.append(displayed_text)
    print(f"\n{'-' * 100}\n".join(displayed_texts))
```
</details>
You will get

```
{'num_added': 3, 'num_updated': 0, 'num_skipped': 0, 'num_deleted': 0}

Document 1b19816e-b802-53c0-ad60-5ff9d9b9b911:

1-2 I have a daugter~

Metadata:{'source': 'fullhell.alchemist'}
----------------------------------------------------------------------------------------------------
Document 3362f9bc-991a-5dd5-b465-c564786ce19c:

1-1 I have a dog~

Metadata:{'source': 'fullhell.alchemist'}
----------------------------------------------------------------------------------------------------
Document a4d50169-2fda-5339-a196-249b5f54a0de:

1-2 I have a daugter~

Metadata:{'source': 'fullhell.alchemist'}
```
This is not correct. We should be able to expect that the vectorstore
now includes doc1_1, doc1_2, and doc2, but not doc1_1, doc1_2, and
doc1_2.


# Reason
In `index`, the original code is 
```python
uids = []
docs_to_index = []
for doc, hashed_doc, doc_exists in zip(doc_batch, hashed_docs, exists_batch):
    if doc_exists:
        # Must be updated to refresh timestamp.
        record_manager.update([hashed_doc.uid], time_at_least=index_start_dt)
        num_skipped += 1
        continue
    uids.append(hashed_doc.uid)
    docs_to_index.append(doc)
```
In the aforementioned example, `len(doc_batch) == 4`, but
`len(hashed_docs) == len(exists_batch) == 3`. This is because the
deduplication of the input documents [doc1_1, doc1_2, doc1_2, doc2] is
[doc1_1, doc1_2, doc2]. So `index` inserts doc1_1, doc1_2, doc1_2 with
the uids of doc1_1, doc1_2, doc2.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
1 year ago
Joshua Sundance Bailey d67b120a41
Make anthropic_api_key a secret str (#10724)
This PR makes `ChatAnthropic.anthropic_api_key` a `pydantic.SecretStr`
to avoid inadvertently exposing API keys when the `ChatAnthropic` object
is represented as a str.
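
A short hedged illustration of the effect (the key value is a placeholder):

```python
from langchain.chat_models import ChatAnthropic

chat = ChatAnthropic(anthropic_api_key="sk-ant-placeholder")
# With SecretStr, representing the field masks the key instead of leaking it.
print(chat.anthropic_api_key)  # expected to show something like **********
```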
1 year ago
Bagatur 1b65779905
fix integration tests (#10952) 1 year ago
Harrison Chase 9062e36722
Harrison/agents structured (#10911) 1 year ago
C.J. Jameson b4d2663beb
CONTRIBUTING.md Quick Start: focus on langchain core; clarify docs and experimental are separate (#10906)
follow up to https://github.com/langchain-ai/langchain/pull/7959 ,
explaining better to focus just on langchain core

no dependencies

twitter @cjcjameson
1 year ago
Michael Landis f30b4697d4
fix: broken link in libs/langchain README (#10920)
**Description**
Fixes broken link to `CONTRIBUTING.md` in `libs/langchain/README.md`.

Because`libs/langchain/README.md` was copied from the top level README,
and because the README contains a link to `.github/CONTRIBUTING.md`, the
copied README's link relative path must be updated. This commit fixes
that link.
1 year ago
Bagatur 3cb460d5d8
bump 300 (#10940) 1 year ago
Nuno Campos 3d5e92e3ef
Accept run name arg for non-chain runs (#10935) 1 year ago
Nuno Campos aac2d4dcef
In MergerRetriever async call all retrievers in parallel (#10938) 1 year ago
German Martin 66d5a7e7cf
Add async support to multi-query retriever. (#10873)
Added async support to the MultiQueryRetriever class.
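
A hedged sketch of calling it asynchronously (the vector store and LLM choices are illustrative):

```python
import asyncio

from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain.vectorstores import FAISS

async def main() -> None:
    vectorstore = FAISS.from_texts(["LangChain supports retrievers."], OpenAIEmbeddings())
    retriever = MultiQueryRetriever.from_llm(
        retriever=vectorstore.as_retriever(), llm=ChatOpenAI()
    )
    # Async retrieval entry point exercised by this change.
    docs = await retriever.aget_relevant_documents("What does LangChain support?")
    print(docs)

asyncio.run(main())
```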

---------

Co-authored-by: Nuno Campos <nuno@boringbits.io>
1 year ago
Leonid Kuligin 9d4b710a48
small fixes to Vertex (#10934)
Fixed tests, updated the required version of the SDK and a few minor
changes after the recent improvement
(https://github.com/langchain-ai/langchain/pull/10910)
1 year ago
wo0d 4e58b78102
Fix chat_history message order (#10869)
Not all databases use id as the default order, so add it explicitly.

sqlite uses rowid as the default order in a select statement:
[https://www.sqlite.org/lang_createtable.html#rowid](https://www.sqlite.org/lang_createtable.html#rowid),
but some other databases like postgresql do not behave like this. Since
this class supports multiple db engines, we should set an explicit order.
1 year ago
Roman Shaptala 3d40de75c5
Fix default refine prompt template bug (#10928)
**Description:**
  
The default refine template does not actually use the refine template
defined above; it uses a string containing the variable name.
 @baskaryan, @eyurtsev, @hwchase17
1 year ago
Bagatur cab55e9bc1
add vertex prod features (#10910)
- chat vertex async
- vertex stream
- vertex full generation info
- vertex use server-side stopping
- model garden async
- update docs for all the above

in a follow up will add
- [ ] chat vertex full generation info
- [ ] chat vertex retries
- [ ] scheduled tests
1 year ago
Bagatur dccc20b402
add model feat table (#10921) 1 year ago
William FH ee8653f62c
Wfh/allow nonparallel (#10914) 1 year ago
Leonid Kuligin 95e1d1fae6
fix in the docstring (#10902)
Description: A fix in the documentation on how to use
`GoogleSearchAPIWrapper`.
1 year ago
Bagatur af41bc84e6
bump 299 (#10904) 1 year ago
Bagatur 9a858a9107
Bagatur/arxiv kwargs (#10903)
support all arXiv api wrapper kwargs in loader
1 year ago
niklas e5f420d2bc
Fix typo in URL document loader example (#10585)
- **Description:** Fix typo in URL document loader example
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Tag maintainer:** not urgent
1 year ago
Nuno Campos ea26c12b23
Fix Runnable.transform() for false-y inputs (#10893)
---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Nuno Campos fcb5aba9f0
Add `Runnable.astream_log()` (#10374)
---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Harrison Chase a1ade48e8f
update agent docs (#10894) 1 year ago
Bagatur d37ce48e60
sep base url and loaded url in sub link extraction (#10895) 1 year ago
Bagatur 24cb5cd379
bump 298 (#10892) 1 year ago
Bagatur c1f9cc0bc5
recursive loader add status check (#10891) 1 year ago
Matvey Arye 6e02c45ca4
Add integration for Timescale Vector(Postgres) (#10650)
**Description:**
This commit adds a vector store for the Postgres-based vector database
(`TimescaleVector`).

Timescale Vector(https://www.timescale.com/ai) is PostgreSQL++ for AI
applications. It enables you to efficiently store and query billions of
vector embeddings in `PostgreSQL`:
- Enhances `pgvector` with faster and more accurate similarity search on
1B+ vectors via DiskANN inspired indexing algorithm.
- Enables fast time-based vector search via automatic time-based
partitioning and indexing.
- Provides a familiar SQL interface for querying vector embeddings and
relational data.

Timescale Vector scales with you from POC to production:
- Simplifies operations by enabling you to store relational metadata,
vector embeddings, and time-series data in a single database.
- Benefits from a rock-solid PostgreSQL foundation with enterprise-grade
features like streaming backups and replication, high availability and
row-level security.
- Enables a worry-free experience with enterprise-grade security and
compliance.

Timescale Vector is available on Timescale, the cloud PostgreSQL
platform. (There is no self-hosted version at this time.) LangChain
users get a 90-day free trial for Timescale Vector.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Avthar Sewrathan <avthar@timescale.com>
1 year ago
Michael Feil 55570e54e1
gradient.ai LLM intregration (#10800)
- **Description:** This PR implements a new LLM API to
https://gradient.ai
- **Issue:** Feature request for LLM #10745 
- **Dependencies**: No additional dependencies are introduced. 
- **Tag maintainer:** I am opening this PR for visibility, once ready
for review I'll tag.

- ```make format && make lint && make test``` is running.
- added an `integration` and a `mock unit` test.


Co-authored-by: michaelfeil <me@michaelfeil.eu>
Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Bagatur 5097007407
cleanup recursive url session (#10863) 1 year ago
Harrison Chase 777b33b873
fix experimental imports (#10875) 1 year ago
Harrison Chase 808caca607
beef up agent docs (#10866) 1 year ago
Sharath Rajasekar 96023f94d9
Add Javelin integration (#10275)
We are introducing the py integration to Javelin AI Gateway
www.getjavelin.io. Javelin is an enterprise-scale fast llm router &
gateway. Could you please review and let us know if there is anything
missing.

Javelin AI Gateway wraps Embedding, Chat and Completion LLMs. Uses
javelin_sdk under the covers (pip install javelin_sdk).

Author: Sharath Rajasekar, Twitter: @sharathr, @javelinai

Thanks!!
1 year ago
Bagatur 957956ba6d
bump 297 (#10861) 1 year ago
Harrison Chase 1bc3244db9
fix loading of sql chain (#10860)
Closing #6889
1 year ago
Bagatur b05a74b106
fix recursive loader (#10856) 1 year ago
Bagatur de0a02f507
fix extract sublink bug (#10855) 1 year ago
Harrison Chase 7dec2d399b
format intermediate steps (#10794)
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
1 year ago
Harrison Chase 386ef1e654
add agent output parsers (#10790) 1 year ago
Mukit Momin 67c5950df3
Amazon Bedrock Support Streaming (#10393)
### Description

- Add support for streaming with the `Bedrock` LLM and `BedrockChat` Chat
Model (a usage sketch follows below).
- Bedrock as of now supports streaming for the `anthropic.claude-*` and
`amazon.titan-*` models only, hence support for those has been built.
- Also increased the default `max_token_to_sample` for Bedrock
`anthropic` model provider to `256` from `50` to keep in line with the
`Anthropic` defaults.
- Added examples for streaming responses to the bedrock example
notebooks.

**_NOTE:_**: This PR fixes the issues mentioned in #9897 and makes that
PR redundant.
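
A hedged usage sketch of streaming with the Bedrock LLM after this change (the AWS profile and model id are placeholders):

```python
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.llms import Bedrock

llm = Bedrock(
    credentials_profile_name="bedrock-admin",  # placeholder AWS profile
    model_id="anthropic.claude-v2",            # one of the streaming-capable model families
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
)
llm("Tell me a short story about a lighthouse keeper.")
```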
1 year ago
Bagatur 0749a642f5
Stream refac and vertex streaming (#10470)
---------

Co-authored-by: Terry Cruz Melo <tcruz@vozy.co>
Co-authored-by: Terry Cruz Melo <33166112+TerryCM@users.noreply.github.com>
1 year ago
William FH f421af8b80
Criteria Parser Improvements (#10824) 1 year ago
Bagatur 46aa90062b
bump exp 19 (#10851) 1 year ago
Bagatur 775f3edffd
bump 296 (#10842) 1 year ago
Bagatur 96a9c27116
fix recursive loader (#10752)
maintain same base url throughout recursion, yield initial page, fixing
recursion depth tracking
1 year ago
Nuno Campos 276125a33b
Use shallow copy on runnable locals (#10825)
- deep copy prevents storing complex objects in locals
1 year ago
DanielZzz ebe08412ad
fix: chat_models Qianfan not compatible with SystemMessage (#10642)
- **Description:** QianfanEndpoint has a bug with SystemMessages: when a
`SystemMessage` is included in the messages passed to
`chat_models.QianfanEndpoint`, a `TypeError` is raised.
  - **Issue:** #10643
  - **Dependencies:** 
  - **Tag maintainer:** @baskaryan
  - **Twitter handle:** no
1 year ago
Massimiliano Pronesti f0198354d9
fix(embeddings): number of texts in Azure OpenAIEmbeddings batch (#10707)
This PR addresses the limitation of Azure OpenAI embeddings, which can
handle at most 16 texts per batch. This can be solved by setting
`chunk_size=16`. However, I'd love to have this automated, so the user
isn't forced to figure out where the issue comes from and how to solve it.
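
For reference, a hedged sketch of the manual workaround described above (the deployment name is a placeholder and the Azure OpenAI environment variables are assumed to be configured):

```python
from langchain.embeddings import OpenAIEmbeddings

# Azure's embedding endpoint accepts at most 16 inputs per request,
# so the batch size is capped explicitly when not relying on the automated fix.
embeddings = OpenAIEmbeddings(
    deployment="my-azure-embedding-deployment",  # placeholder Azure deployment name
    chunk_size=16,
)
vectors = embeddings.embed_documents(["doc one", "doc two"])
```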

Closes #4575. 

@baskaryan

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
1 year ago
zhanghexian 0abe996409
add clustered vearch in langchain (#10771)
---------

Co-authored-by: zhanghexian1 <zhanghexian1@jd.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
1 year ago
HeTaoPKU f505320a73
Add Minimax chat model (#10776)
resolve the merging issues for
https://github.com/langchain-ai/langchain/pull/6757

---------

Co-authored-by: 何涛 <taohe@bytedance.com>
1 year ago
Anar c656a6b966
LLMRails (#10796)
### LLMRails Integration
This PR provides integration with LLMRails. Implemented here are:

langchain/vectorstore/llm_rails.py
tests/integration_tests/vectorstores/test_llm_rails.py
docs/extras/integrations/vectorstores/llm-rails.ipynb

---------

Co-authored-by: Anar Aliyev <aaliyev@mgmt.cloudnet.services>
Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
mateai 900dbd1cbe
Substring support for similarity_search_with_score (#10746)
**Description:** Possible to filter with substrings in
similarity_search_with_score, for example: filter={'user_id':
{'substring': 'user'}}

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
1 year ago
Ansil M B 740eafe41d
Updated return parameter of YouTubeSearchTool (#10743)
**Description:** 
Changed the return value of YouTubeSearchTool:
 

1. Changed the returned youtube video links by adding the prefix
"https://www.youtube.com", so the tool now returns the exact links to the
videos.
2. Updated the return type from 'string' to 'list', which is
better suited for further processing (see the sketch below).

 **Issue:** 
Fixes #10742

 **Dependencies:** 
None
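
A hedged sketch of the updated behavior (the query follows the tool's "query,count" convention):

```python
from langchain.tools import YouTubeSearchTool

tool = YouTubeSearchTool()
# After this change the result is expected to be a list of full
# https://www.youtube.com/... links rather than relative paths.
results = tool.run("lex fridman,5")
print(results)
```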


---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
1 year ago
Harrison Chase 1dae3c383e
Harrison/add submodule to docs (#10803) 1 year ago
Henry (Hezheng) Yin c15bbaac31
misc: add gpt-3.5-turbo-instruct to model_token_mapping (#10808)
A one-line fix to get `max_tokens=-1` working with the `OpenAI` class for the
`gpt-3.5-turbo-instruct` model.
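
A hedged one-liner illustrating the now-working configuration:

```python
from langchain.llms import OpenAI

# max_tokens=-1 asks the wrapper to use the model's full remaining context window,
# which requires the model to be present in model_token_mapping.
llm = OpenAI(model_name="gpt-3.5-turbo-instruct", max_tokens=-1)
print(llm("Write a haiku about autumn."))
```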

Closes https://github.com/langchain-ai/langchain/issues/10806
1 year ago
Harrison Chase d2bee34d4c
Harrison/add vald (#10807)
Co-authored-by: datelier <57349093+datelier@users.noreply.github.com>
1 year ago
Jacob Lee bbc3fe259b
Start RunnableBranch callback tags with 1 instead of 0 (#10755)
Changes to match `RunnableSequences`

@eyurtsev
1 year ago
Ziyang Liu 931b292126
Add support for HTTP PUT in the open api agent prompt (#10763)
**Description:** This PR adds HTTP PUT support for the langchain openapi
agent toolkit by leveraging the existing structure and HTTP PUT request
wrapper. The PUT method is almost identical to HTTP POST but should be
idempotent, and therefore stricter than POST, which is not idempotent. Some
APIs may prefer PUT over POST, which is unfortunately not yet supported
by the current toolkit.
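
As a rough illustration (not from this PR), the underlying requests wrapper
already exposes a PUT helper; the endpoint URL and payload below are
placeholders:

```python
from langchain.requests import TextRequestsWrapper

requests_wrapper = TextRequestsWrapper()
# PUT is idempotent: sending the same payload twice should leave the
# resource in the same state, unlike POST.
response_text = requests_wrapper.put(
    "https://httpbin.org/put",  # placeholder endpoint
    {"name": "example", "value": 1},
)
print(response_text)
```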
1 year ago
Mateusz Wosinski a29cd89923
Synthetic data generation (#9759)
### Description

Implements synthetic data generation with the fields and preferences
given by the user. Adds showcase notebook.
Corresponding prompt was proposed for langchain-hub.

### Example

```
output = chain({"fields": {"colors": ["blue", "yellow"]}, "preferences": {"style": "Make it in a style of a weather forecast."}})
print(output)

# {'fields': {'colors': ['blue', 'yellow']},
 'preferences': {'style': 'Make it in a style of a weather forecast.'},
 'text': "Good morning! Today's weather forecast brings a beautiful combination of colors to the sky, with hues of blue and yellow gently blending together like a mesmerizing painting."}
```

### Twitter handle 

@deepsense_ai @matt_wosinski

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Bagatur c4a6de3fc9
Revert "Add ChatGLM for llm and chat_model by using ChatGLM API (#9797)" (#10805)
@etveritas reverting for now until this is resolved
https://github.com/langchain-ai/langchain/pull/9797/files#r1330795585,
apologies for merging too eagerly!
1 year ago
Mickaël c86a1a6710
chore: allow using dataclasses_json dependency v0.6.0 (#10775)
**Description:** upgrade the `dataclasses_json` dependency to its latest
version ([no real breaking
change](https://github.com/lidatong/dataclasses-json/releases/tag/v0.6.0)
if used correctly), while allowing previous version to not break other
users' setup
**Issue:** I need to use the latest version of that dependency in my
project, but `langchain` prevents it.

Note: it looks like running `poetry lock --no-update` did some changes
to the lockfiles as it was the first time it was with the
`macosx_11_0_arm64` architecture 🤷

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
1 year ago
Bagatur 76dd7480e6
Add batch_size param to Weaviate vector store (#9890)
cc @mcantillon21 @hsm207 @cs0lar
1 year ago
Mateusz Wosinski 720f6dbaac
Add XMLOutputParser (#10051)
**Description**
Adds a new output parser, this time enabling the output of the LLM to be in
XML format. Seems to be particularly useful together with Claude models.
Addresses [issue
9820](https://github.com/langchain-ai/langchain/issues/9820).
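
A minimal sketch of using the new parser; the import path and the exact shape
of the parsed output may vary slightly by version:

```python
from langchain.output_parsers import XMLOutputParser

parser = XMLOutputParser()

# Parse an XML string as it might come back from an LLM (e.g. Claude).
xml_text = """
<movies>
    <movie>The Matrix</movie>
    <movie>Inception</movie>
</movies>
"""
print(parser.parse(xml_text))             # nested dict/list structure
print(parser.get_format_instructions())   # instructions to include in the prompt
```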

**Twitter handle**
@deepsense_ai @matt_wosinski
1 year ago
etVERITAS d6df288380
Add ChatGLM for llm and chat_model by using ChatGLM API (#9797)
Usage sample:
```
endpoint_url = "<API URL>"
ChatGLM_llm = ChatGLM(
    endpoint_url=endpoint_url,
    api_key="<your ChatGLM API key>",
)
print(ChatGLM_llm("hello"))
```

```
model = ChatChatGLM(
    chatglm_api_key="api_key",
    chatglm_api_base="api_base_url",
    model_name="model_name"
)
chain = LLMChain(llm=model)
```
Description: The ChatGLM call has been adapted.
Issue: The ChatGLM call has been adapted.
Dependencies: Requires the Python packages `zhipuai` and `aiostream`
Tag maintainer: @baskaryan
Twitter handle: None

I removed the compatibility test for pydantic version 2, because pydantic
v2 cannot pickle classmethods, and BaseModel's @root_validator is a
classmethod decorator.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Harrison Chase d60145229b
make agent action serializable (#10797)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
1 year ago
Maxime Bourliatoux 21b236e5e4
Fixing _InactiveRpcError in MatchingEngine vectorstore (#10056)
- Description: There was an issue with the MatchingEngine VectorStore
preventing it from being used with a public endpoint. In the Google Cloud
library there are two similar methods for private and public endpoints:
`match()` and `find_neighbors()`.
  - Issue: Fixes #8378 
- This uses the `google.cloud.aiplatform` library :
https://github.com/googleapis/python-aiplatform/blob/main/google/cloud/aiplatform/matching_engine/matching_engine_index_endpoint.py
1 year ago
Sam Chou 4f19ba3065
Azure Search: Remove select field restrictions and expand metadata to other fields, also expose kwargs to searches (#9894)
Description: 
If the metadata field is returned in results, the previous behavior is
unchanged. If the metadata field does not exist in results, metadata is
expanded to any fields returned outside of the content field.

There's precedence for this as well, see the retriever:
https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/retrievers/azure_cognitive_search.py#L96C46-L96C46

Issue: 
#9765 - Ameliorates hard-coding in case you have already indexed to Cognitive
Search without a metadata field and instead placed metadata in separate
fields.

@hwchase17
1 year ago
Piyush Jain 94cf71ecfa
Updated Neptune graph to use boto (#10121)
## Description
This PR updates the `NeptuneGraph` class to start using the boto API for
connecting to the Neptune service. With boto integration, the graph
class now supports authenticating requests using Sigv4; this is
encapsulated with the boto API, and users only have to ensure they have
the correct AWS credentials setup in their workspace to work with the
graph class.

This PR also introduces a conditional prompt that uses a simpler prompt
when using the `Anthropic` model provider. A simpler prompt seemed
to work better for generating Cypher queries in our testing.

**Note**: This version will require boto3 version 1.28.38 or greater to
work.
1 year ago
Douglas Monsky d5f1969d55
Introducing Enhanced Functionality to WeaviateHybridSearchRetriever: Accepting Additional Keyword Arguments (#10802)
**Description:** 
This commit enriches the `WeaviateHybridSearchRetriever` class by
introducing a new parameter, `hybrid_search_kwargs`, within the
`_get_relevant_documents` method. This parameter accommodates arbitrary
keyword arguments (`**kwargs`) which can be channeled to the inherited
public method, `get_relevant_documents`, originating from the
`BaseRetriever` class.

This modification facilitates more intricate querying capabilities,
allowing users to convey supplementary arguments to the `.with_hybrid()`
method. This expansion not only makes it possible to perform a more
nuanced search targeting specific properties but also grants the ability
to boost the weight of searched properties, to carry out a search with a
custom vector, and to apply the Fusion ranking method. The documentation
has been updated accordingly to delineate these new possibilities in
detail.

In light of the layered approach in which this search operates,
initiating with `query.get()` and then transitioning to
`.with_hybrid()`, several advantageous opportunities are unlocked for
the hybrid component that were previously unattainable.

Here’s a representative example showcasing a query structure that was
formerly unfeasible:

[Specific Properties
Only](https://weaviate.io/developers/weaviate/search/hybrid#selected-properties-only)
"The example below illustrates a BM25 search targeting the keyword
'food' exclusively within the 'question' property, integrated with
vector search results corresponding to 'food'."
```python
response = (
    client.query
    .get("JeopardyQuestion", ["question", "answer"])
    .with_hybrid(
        query="food",
        properties=["question"], # Will now be possible moving forward
        alpha=0.25
    )
    .with_limit(3)
    .do()
)
```
This functionality is now accessible through my alterations, by
conveying `hybrid_search_kwargs={"properties": ["question", "answer"]}`
as an argument to
`WeaviateHybridSearchRetriever.get_relevant_documents()`. For example:

```python
import os
from weaviate import Client
from langchain.retrievers import WeaviateHybridSearchRetriever

client = Client(
        url=os.getenv("WEAVIATE_CLIENT_URL"),
        additional_headers={
            "X-OpenAI-Api-Key": os.getenv("OPENAI_API_KEY"),
            "Authorization": f"Bearer {os.getenv('WEAVIATE_API_KEY')}",
        },
    )

index_name = "Document"
text_key = "content"
attributes = ["title", "summary", "header", "url"]

retriever = WeaviateHybridSearchRetriever(
        client=client,
        index_name=index_name,
        text_key=text_key,
        attributes=attributes,
    )

# Warning: to utilize properties in this way, each property used must also be in the list `attributes + [text_key]`.
hybrid_search_kwargs = {"properties": ["summary^2", "content"]}
query_text = "Some Query Text"

relevant_docs = retriever.get_relevant_documents(
        query=query_text,
        hybrid_search_kwargs=hybrid_search_kwargs
    )
```
In my experience working with the `weaviate-client` library, I have
found that these supplementary options stand as vital tools for
refining/fine-tuning searches, notably within multifaceted datasets. As a
final note, this implementation supports both backward and (within reason)
forward compatibility. It accommodates any future additional
parameters Weaviate may add to `.with_hybrid()`, without necessitating
further alterations.

**Additional Documentation:**
For a more comprehensive understanding and to explore a myriad of useful
options that are now accessible, please refer to the Weaviate
documentation:
- [Fusion Ranking
Method](https://weaviate.io/developers/weaviate/search/hybrid#fusion-ranking-method)
- [Selected Properties
Only](https://weaviate.io/developers/weaviate/search/hybrid#selected-properties-only)
- [Weight Boost Searched
Properties](https://weaviate.io/developers/weaviate/search/hybrid#weight-boost-searched-properties)
- [With a Custom
Vector](https://weaviate.io/developers/weaviate/search/hybrid#with-a-custom-vector)

**Tag Maintainer:** 
@hwchase17 - I have tagged you based on your frequent contributions to
the pertinent file, `/retrievers/weaviate_hybrid_search.py`. My
apologies if this was not the appropriate choice.

Thank you for considering my contribution, I look forward to your
feedback, and to future collaboration.
1 year ago
Jacob Lee 61cecf8b1b
Fix for versioned OpenAI instruct models (#10788)
Versioned OpenAI instruct models may end with numbers, e.g.
`gpt-3.5-turbo-instruct-0914`.

Fixes https://github.com/langchain-ai/langchainjs/issues/2669 in Python
1 year ago
Cory Zue 62603f2664
make auto-setting the encodings optional, alow explicitly setting it (#10774)
I was trying to use web loaders on some Spanish documentation (e.g.
[this site](https://www.fromdoppler.com/es/mailing-tendencias/)), but the
encoding auto-detected by the change introduced in
https://github.com/langchain-ai/langchain/pull/3602 came back as
"MacRoman" instead of the (correct) "UTF-8".

To address this, I've added the ability to disable the auto-encoding, as
well as the ability to explicitly tell the loader what encoding to use.

- **Description:** Makes auto-setting the encoding optional in
`WebBaseLoader`, and introduces an `encoding` option to explicitly set
it.
  - **Dependencies:** N/A
  - **Tag maintainer:** @hwchase17 
  - **Twitter handle:** @czue
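
A small sketch of the new option, assuming only the `encoding` keyword
described above:

```python
from langchain.document_loaders import WebBaseLoader

# Explicitly set the encoding instead of relying on auto-detection,
# which mis-identified this Spanish page as "MacRoman".
loader = WebBaseLoader(
    "https://www.fromdoppler.com/es/mailing-tendencias/",
    encoding="utf-8",
)
docs = loader.load()
print(docs[0].page_content[:200])
```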
1 year ago
Harrison Chase c68be4eb2b
tool rendering (#10786) 1 year ago
Aashish Saini 1b050b98f5
Corrected some spelling mistakes and grammatical errors (#10791)
Corrected some spelling mistakes and grammatical errors
CC: @baskaryan, @eyurtsev, @hwchase17.

---------

Co-authored-by: Ishita Chauhan <136303787+IshitaChauhanShortHillsAI@users.noreply.github.com>
Co-authored-by: Aashish Saini <141953346+AashishSainiShorthillsAI@users.noreply.github.com>
Co-authored-by: ManpreetShorthillsAI <142380984+ManpreetShorthillsAI@users.noreply.github.com>
Co-authored-by: AryamanJaiswalShorthillsAI <142397527+AryamanJaiswalShorthillsAI@users.noreply.github.com>
Co-authored-by: Adarsh Shrivastav <142413097+AdarshKumarShorthillsAI@users.noreply.github.com>
Co-authored-by: Vishal <141389263+VishalYadavShorthillsAI@users.noreply.github.com>
Co-authored-by: ChetnaGuptaShorthillsAI <142381084+ChetnaGuptaShorthillsAI@users.noreply.github.com>
Co-authored-by: PankajKumarShorthillsAI <142473460+PankajKumarShorthillsAI@users.noreply.github.com>
Co-authored-by: AbhishekYadavShorthillsAI <142393903+AbhishekYadavShorthillsAI@users.noreply.github.com>
Co-authored-by: AmitSinghShorthillsAI <142410046+AmitSinghShorthillsAI@users.noreply.github.com>
Co-authored-by: Md Nazish Arman <142379599+MdNazishArmanShorthillsAI@users.noreply.github.com>
Co-authored-by: KamalSharmaShorthillsAI <142474019+KamalSharmaShorthillsAI@users.noreply.github.com>
Co-authored-by: Lakshya <lakshyagupta87@yahoo.com>
Co-authored-by: Aayush <142384656+AayushShorthillsAI@users.noreply.github.com>
Co-authored-by: AnujMauryaShorthillsAI <142393269+AnujMauryaShorthillsAI@users.noreply.github.com>
Co-authored-by: ishita <chauhanishita5356@gmail.com>
1 year ago
Ahmad Bunni 5272e42b0d
Add namespace to pinecone hybrid search (#10677)
**Description:** 
  
Pinecone hybrid search is currently limited to the default namespace. There is
no option for the user to provide a namespace to partition an index, which
is one of the most important features of Pinecone.
  
**Resource:** 
https://docs.pinecone.io/docs/namespaces
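
A hedged sketch of how the namespace might be supplied; the `namespace`
keyword is assumed from this PR's description, and `index` is assumed to be an
already-initialized pinecone.Index:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import PineconeHybridSearchRetriever
from pinecone_text.sparse import BM25Encoder

# `index` is assumed to be an existing pinecone.Index instance.
retriever = PineconeHybridSearchRetriever(
    embeddings=OpenAIEmbeddings(),
    sparse_encoder=BM25Encoder().default(),
    index=index,
    namespace="customer-a",  # partition searches to this namespace
)
docs = retriever.get_relevant_documents("refund policy")
```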

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
1 year ago
Bagatur 0d1550da91
Bagatur/bump 295 (#10785) 1 year ago
Vikram Shitole a4e858b111
Sagemaker endpoint capability to inject boto3 client for cross account scenarios (#10728)
- **Description:** Allow injecting a boto3 client for cross-account access
scenarios when using a Sagemaker Endpoint
  - **Issue:** #10634 #10184
  - **Dependencies:** None
  - **Tag maintainer:** 
  - **Twitter handle:** lethargicoder

Co-authored-by: Vikram(VS) <vssht@amazon.com>
1 year ago
William FH c8f386db97
Merge metadata + tags in config (#10762)
I think these should be a merge/update rather than an overwrite
1 year ago
BarberAlec c898a4d7ba
Update ContextCallbackHandler Docstring & metadata key (#10732)
- **Description:** Updates the URL in the Context callback docstrings and
updates the metadata key the Context CallbackHandler uses to send model names.
- **Issue:** The URL in ContextCallbackHandler is out of date. Model
data being sent to Context should be under the "model" key and not
"llm_model". This allows Context to do more sophisticated analysis.
  - **Dependencies:** None

Tagging @agamble.
1 year ago
Harrison Chase 8b68d1a03b
keep reference to old embeddings base (#10759) 1 year ago
Jacob Lee babf46692d
Allow extra variables when invoking prompt templates (#10765)
Makes chaining easier as many maps have extra properties.

@baskaryan @hwchase17
1 year ago
Bagatur 8515e27d82
bump 294 (#10751) 1 year ago
Jacob Lee 579d14fbc1
Allow 3.5-turbo instruct models in the OpenAI LLM class (#10750)
@baskaryan @hwchase17
1 year ago
Harrison Chase e404fd39dd
add anthropic page (#10666) 1 year ago
Bagatur 5072138893
bump 293 (#10740) 1 year ago
Harrison Chase 12ff780089
move embeddings to schema (#10696) 1 year ago
Jiayi Ni ce61840e3b
ENH: Add `llm_kwargs` for Xinference LLMs (#10354)
- This PR adds `llm_kwargs` to the initialization of Xinference LLMs
(integrated in #8171).
- With this enhancement, users can provide `generate_configs` not only
when calling the LLMs for generation but also during initialization.
This allows users to include custom configurations when
utilizing LangChain features like LLMChain.
- It also fixes some formatting issues in the docstrings.
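
A sketch of the new initialization path; the server URL, model UID, and the
specific generate-config keys below are placeholders, and the `xinference`
package plus a running Xinference server are assumed:

```python
from langchain.llms import Xinference

# Generation configs passed at init via `llm_kwargs` also apply when the
# LLM is used inside LangChain constructs such as LLMChain.
llm = Xinference(
    server_url="http://localhost:9997",  # placeholder server URL
    model_uid="my-model-uid",            # placeholder model UID
    llm_kwargs={"max_tokens": 256, "temperature": 0.7},
)
print(llm("Say hello in one sentence."))
```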
1 year ago
Eugene Yurtsev 1eefb9052b
RunnableBranch (#10594)
Runnable Branch implementation, no optimization for streaming logic yet
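
A minimal sketch of the new primitive; conditions and branches can be plain
callables, which are coerced to runnables:

```python
from langchain.schema.runnable import RunnableBranch

# Each (condition, runnable) pair is tried in order; the trailing
# argument is the default branch used when no condition matches.
branch = RunnableBranch(
    (lambda x: isinstance(x, str), lambda x: x.upper()),
    (lambda x: isinstance(x, int), lambda x: x + 1),
    lambda x: "default",
)
print(branch.invoke("hello"))  # -> "HELLO"
print(branch.invoke(41))       # -> 42
```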
1 year ago
William FH 287c81db89
Catch Base Exception (#10607)
Currently the on_*_error callbacks aren't called for CancelledErrors. This is
because in Python 3.8, the inheritance of CancelledError changed from
Exception to BaseException.


https://docs.python.org/3/library/asyncio-exceptions.html#asyncio.CancelledError
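
A quick illustration of the inheritance change that motivates catching
BaseException:

```python
import asyncio

# Since Python 3.8, asyncio.CancelledError derives directly from
# BaseException, so a bare `except Exception` no longer catches it and the
# on_*_error callbacks would be skipped without this change.
print(issubclass(asyncio.CancelledError, Exception))      # False on 3.8+
print(issubclass(asyncio.CancelledError, BaseException))  # True
```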
1 year ago
Philippe PRADOS 39c1c94272
Fix typing in WebResearchRetriever (#10734)
Hello @hwchase17 

**Issue**:
The WebResearchRetriever class accepts only
RecursiveCharacterTextSplitter, but never uses anything specific to this
class. I propose changing the type to TextSplitter so that the linter can
accept all subtypes.
1 year ago
Nuno Campos 8201cae770
Bug fixes for runnables (#10738)
- tools invoked in async methods would not work due to a missing await
- RunnableSequence.stream() was creating an extra root run by mistake,
and it can be simplified thanks to the existence of a default implementation
for .transform()

1 year ago
William FH 6e48092746
Update LangSmith Version (#10722)
And assign dataset ID upon project creation
1 year ago
William FH a3e5507faa
Make eval output parsers more robust (#10658)
Ran through a few hundred generations with some models to fix up the
parsers
1 year ago
William FH c5078fb13c
Add support for showing IO to chain group (#10510)
As well as error propagation
1 year ago
Harrison Chase 2c957de2fc
add checks on basic base modules (#10693) 1 year ago
Harrison Chase 5442d2b1fa
Harrison/stop importing from init (#10690) 1 year ago
Hedeer El Showk 9749f8ebae
database -> db in from_llm (#10667)
**Description:** Renamed the argument `database` in
`SQLDatabaseSequentialChain.from_llm()` to `db`.

I realize it's tiny and a bit of a nitpick, but for consistency with
SQLDatabaseChain (and all the others, actually) I thought it should be
renamed. It also tripped me up while working with it today.

1 year ago
Joshua Sundance Bailey c4e591a57d
OpenAI function calling docstring and notebook imports (#10663)
This PR is a documentation fix.

Description:
* fixes imports in the code samples in the docstrings of
`create_openai_fn_chain` and `create_structured_output_chain`
* fixes imports in
`docs/extras/modules/chains/how_to/openai_functions.ipynb`
* removes unused imports from the notebook

Issues:
* the docstrings use `from pydantic_v1 import BaseModel, Field` which
this PR changes to `from langchain.pydantic_v1 import BaseModel, Field`
* importing `pydantic` instead of `langchain.pydantic_v1` leads to
errors later in the notebook
1 year ago
Nuno Campos 9cd131a178
Support kwargs in RunnableWithFallbacks (#10682)
1 year ago
Bagatur 6831a25675
bump 292 (#10649) 1 year ago
Nuno Campos 029b2f6aac
Allow calls to batch() with 0 length arrays (#10627)
This can happen if, e.g., the input to batch is a dynamically generated list, where a 0-length list might be a valid use case
1 year ago
Jacob Lee a50e62e44b
Adds transform and atransform support to runnable sequences (#9583)
Allow runnable sequences to support transform if each individual
runnable inside supports transform/atransform.

@nfcampos
1 year ago
Aashish Saini f9f1340208
Fixed some grammatical and spelling errors (#10595)
Fixed some grammatical and spelling errors
1 year ago
Ackermann Yuriy 5e50b89164
Added embeddings support for ollama (#10124)
- Description: Added support for Ollama embeddings
  - Dependencies: N/A
  - Twitter handle: @herrjemand

cc  https://github.com/jmorganca/ollama/issues/436
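
A minimal usage sketch, assuming a local Ollama server (default
http://localhost:11434) with the named model already pulled:

```python
from langchain.embeddings import OllamaEmbeddings

embeddings = OllamaEmbeddings(model="llama2")  # model name is a placeholder
query_vector = embeddings.embed_query("What is the capital of France?")
doc_vectors = embeddings.embed_documents(["first document", "second document"])
print(len(query_vector), len(doc_vectors))
```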
1 year ago
Bagatur bc6b9331a9
bump 291 (#10604) 1 year ago
Bagatur ecbb1ed8cb
Replicate params fix (#10603) 1 year ago
Bagatur 50bb704da5
bump 290 (#10602) 1 year ago
Bagatur e195b78e1d
Fix replicate model kwargs (#10599) 1 year ago
Bagatur 77a165e0d9
fix replicate output type (#10598) 1 year ago
Bagatur 0786395b56
bump 289 (#10586)
1 year ago
Bagatur 9dd4cacae2
add replicate stream (#10518)
support direct replicate streaming. cc @cbh123 @tjaffri
1 year ago
Bagatur 7f3f6097e7
Add mmr support to redis retriever (#10556) 1 year ago
Bagatur ccf71e23e8
cache replicate version (#10517)
In a subsequent PR, _call will be updated to use replicate.run directly when
not streaming, so the version object isn't needed at all

cc @cbh123 @tjaffri
1 year ago
Stefano Lottini 49b65a1b57
CassandraCache and CassandraSemanticCache can handle any "Generation" (#10563)
Hello,
this PR improves caching coverage for the two Cassandra-related
caches (exact-match and semantic alike) by switching to the more
general `dumps`/`loads` serdes utilities.

This enables cache usage within e.g. `ChatOpenAI` contexts (which need
to store lists of `ChatGeneration` instead of `Generation`s), which was
not possible as long as the cache classes relied on the legacy
`_dump_generations_to_json` and `_load_generations_from_json` helpers.

Additionally, a slightly different init signature is introduced for the
cache objects:
- named parameters required for init, to pave the way for easier changes
in the future connect-to-db flow (and tests adjusted accordingly)
- added an optional `skip_provisioning` passthrough parameter for use
cases where the user knows the underlying DB table, etc. already exists.

Thank you for a review!
1 year ago
Tomaz Bratanic e1e01d6586
Add Neo4j vector index hybrid search (#10442)
Adding support for Neo4j vector index hybrid search option. In Neo4j,
you can achieve hybrid search by using a combination of vector and
fulltext indexes.
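
A sketch of requesting the hybrid search type; connection details and index
names below are placeholders, and the argument names are assumptions based on
this PR's description:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Neo4jVector

# Hybrid search combines the vector index with a fulltext (keyword) index.
store = Neo4jVector.from_existing_index(
    OpenAIEmbeddings(),
    url="bolt://localhost:7687",        # placeholder connection details
    username="neo4j",
    password="password",
    index_name="vector_index",          # placeholder index names
    keyword_index_name="keyword_index",
    search_type="hybrid",
)
docs = store.similarity_search("What is hybrid search?")
```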

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
William FH 596f294b01
Update LangSmith Walkthrough (#10564) 1 year ago
stonekim adabdfdfc7
Add Baidu Qianfan endpoint for LLM (#10496)
- Description:
* Baidu AI Cloud's [Qianfan
Platform](https://cloud.baidu.com/doc/WENXINWORKSHOP/index.html) is an
all-in-one platform for large model development and service deployment,
catering to enterprise developers in China. Qianfan Platform offers a
wide range of resources, including the Wenxin Yiyan model (ERNIE-Bot)
and various third-party open-source models.
- Issue: none
- Dependencies: 
    * qianfan
- Tag maintainer: @baskaryan
- Twitter handle:

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Sergey Kozlov 0a0276bcdb
Fix OpenAIFunctionsAgent function call message content retrieving (#10488)
`langchain.agents.openai_functions[_multi]_agent._parse_ai_message()`
incorrectly extracts the AI message content, so the LLM response ("thoughts")
is lost and can't be logged or processed by callbacks.

This PR fixes function call message content retrieving.
1 year ago
Michael Kim 2dc3c64386
Adding headers for accessing pdf file url (#10370)
- Description: Set up a 'file_headers' param for accessing a PDF file URL
  - Tag maintainer: @hwchase17 

 make format, make lint, make test

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Renze Yu a34510536d
Improve code example indent (#10490) 1 year ago
Ali Soliman bcf130c07c
Fix Import BedrockChat (#10485)
- Description: Couldn't import BedrockChat from the chat_models
  - Issue: #10468
  - Dependencies: N/A

---------

Co-authored-by: Ali Soliman <alisaws@amazon.nl>
Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Stefano Lottini 415d38ae62
Cassandra Vector Store, add metadata filtering + improvements (#9280)
This PR addresses a few minor issues with the Cassandra vector store
implementation and extends the store to support Metadata search.

Thanks to the latest cassIO library (>=0.1.0), metadata filtering is
available in the store.

Further,
- the "relevance" score is prevented from being flipped in the [0,1]
interval, thus ensuring that 1 corresponds to the closest vector (this
is related to how the underlying cassIO class returns the cosine
difference);
- bumped the cassIO package version both in the notebooks and the
pyproject.toml;
- adjusted the textfile location for the vector-store example after the
reshuffling of the Langchain repo dir structure;
- added demonstration of metadata filtering in the Cassandra vector
store notebook;
- better docstring for the Cassandra vector store class;
- fixed test flakiness and removed offending out-of-place escape chars
from a test module docstring;

To my knowledge all relevant tests pass and mypy+black+ruff don't
complain. (mypy gives unrelated errors in other modules, which clearly
don't depend on the content of this PR).

Thank you!
Stefano

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Bagatur 49694f6a3f
explicitly check openllm return type (#10560)
cc @aarnphm
1 year ago
Joshua Sundance Bailey 85e05fa5d6
ArcGISLoader: add keyword arguments, error handling, and better tests (#10558)
* More clarity around how geometry is handled. Not returned by default;
when returned, stored in metadata. This is because it's usually a waste
of tokens, but it should be accessible if needed.
* User can supply layer description to avoid errors when layer
properties are inaccessible due to passthrough access.
* Enhanced testing
* Updated notebook

---------

Co-authored-by: Connor Sutton <connor.sutton@swca.com>
Co-authored-by: connorsutton <135151649+connorsutton@users.noreply.github.com>
1 year ago
Aaron Pham ac9609f58f
fix: unify generation outputs on newer openllm release (#10523)
Updates to the newer generation format from OpenLLM, where it returns a
dictionary for one-shot generation

cc @baskaryan 

Signed-off-by: Aaron <29749331+aarnphm@users.noreply.github.com>

---------

Signed-off-by: Aaron <29749331+aarnphm@users.noreply.github.com>
1 year ago
Aashish Saini 201b61d5b3
Fixed Import Error type in base.py (#10209)
I have revamped the code to ensure uniform error handling for
ImportError. Instead of the previous reliance on ValueError, I have
adopted the conventional practice of raising ImportError and providing
informative error messages. This change enhances code clarity and
clearly signifies that any problems are associated with module imports.
1 year ago
volodymyr-memsql a43abf24e4
Fix SingleStoreDB (#10534)
After the refactoring #6570, the DistanceStrategy class was moved to
another module and this introduced a bug into the SingleStoreDB vector
store, as the `DistanceStrategy.EUCLEDIAN_DISTANCE` started to convert
into the 'DistanceStrategy.EUCLEDIAN_DISTANCE' string, instead of just
'EUCLEDIAN_DISTANCE' (same for 'DOT_PRODUCT').

In this change, I check the type of the parameter and use `.name`
attribute to get the correct object's name.

---------

Co-authored-by: Volodymyr Tkachuk <vtkachuk-ua@singlestore.com>
1 year ago
Tom Piaggio d1f2075bde
Fix `GoogleEnterpriseSearchRetriever` (#10546)
- Description: fixed Google Enterprise Search Retriever where it was
consistently returning empty results,
- Issue: related to [issue
8219](https://github.com/langchain-ai/langchain/issues/8219),
  - Dependencies: no dependencies,
  - Tag maintainer: @hwchase17 ,
  - Twitter handle: [Tomas Piaggio](https://twitter.com/TomasPiaggio)!
1 year ago
berkedilekoglu 73b9ca54cb
Using batches for update document with a new function in ChromaDB (#6561)
2a4b32dee2/langchain/vectorstores/chroma.py (L355-L375)

Currently, the defined update_document function only takes a single
document and its ID for updating. However, Chroma can update multiple
documents by taking a list of IDs and documents for batch updates. If we
updated the 'update_document' function, both document_id and document could
be `Union[str, List[str]]`, but we would need to do a type check, because the
embed_documents and update functions take a List for the text and
document_ids variables. I believe that writing a new function is the
best option.

I update the Chroma vectorstore with refreshed information from my
website every 20 minutes. Updating the update_document function to
perform simultaneous updates for each changed piece of information would
significantly reduce the update time in such use cases.

For my case I update a total of 8810 chunks. Updating these 8810
individual chunks using the current function takes a total of 8.5
minutes. However, if we process the inputs in batches and update them
collectively, all 8810 separate chunks can be updated in just 1 minute.
This significantly reduces the time it takes for users of actively used
chatbots to access up-to-date information.

I can add an integration test and an example for the documentation for
the new update_document_batch function.
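
A hypothetical sketch of how the proposed batch update could be called,
assuming the `update_document_batch(ids, documents)` signature suggested above
and an existing Chroma instance named `vectorstore`:

```python
from langchain.docstore.document import Document

# Hypothetical: update many chunks in one call instead of looping over
# update_document() one ID at a time.
ids = ["chunk-1", "chunk-2"]
documents = [
    Document(page_content="refreshed text 1", metadata={"source": "site"}),
    Document(page_content="refreshed text 2", metadata={"source": "site"}),
]
vectorstore.update_document_batch(ids, documents)  # `vectorstore`: existing Chroma
```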

@hwchase17 

[berkedilekoglu](https://twitter.com/berkedilekoglu)
1 year ago
Bagatur 1835624bad
bump 288 (#10548) 1 year ago
Bagatur 303724980c
Add ElevenLabs text to speech tool (#10525) 1 year ago
Bagatur 79a567d885 Refactor elevenlabs tool 1 year ago
Bagatur 97122fb577
Integration with ElevenLabs text to speech (#10181)
- Description: adds integration with ElevenLabs text-to-speech
[component](https://github.com/elevenlabs/elevenlabs-python) in the
similar way it has been already done for [azure cognitive
services](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/tools/azure_cognitive_services/text2speech.py)
  - Dependencies: elevenlabs
  - Twitter handle: @deepsense_ai, @matt_wosinski
- Future plans: refactor both implementations in order to avoid dumping the
speech file to disk and instead keep it in memory.
1 year ago
Bagatur 7ecee7821a Replicate fix linting 1 year ago
Taqi Jaffri 21fbbe83a7
Fix fine-tuned replicate models with faster cold boot (#10512)
With the latest support for faster cold boot in replicate
https://replicate.com/blog/fine-tune-cold-boots it looks like the
replicate LLM support in langchain is broken since some internal
replicate inputs are being returned.

Screenshot below illustrates the problem:

<img width="1917" alt="image"
src="https://github.com/langchain-ai/langchain/assets/749277/d28c27cc-40fb-4258-8710-844c00d3c2b0">

As you can see, the new replicate_weights param is being sent down with
x-order = 0 (which is causing langchain to use that param instead of
prompt which is x-order = 1)

FYI @baskaryan this requires a fix otherwise replicate is broken for
these models. I have pinged replicate whether they want to fix it on
their end by changing the x-order returned by them.

Update: per suggestion I updated the PR to just allow manually setting
the prompt_key which can be set to "prompt" in this case by callers... I
think this is going to be faster anyway than trying to dynamically query
the model every time if you know the prompt key for your model.

---------

Co-authored-by: Taqi Jaffri <tjaffri@docugami.com>
1 year ago
William FH 57e2de2077
add avg feedback (#10509)
in run_on_dataset agg feedback printout
1 year ago
Bagatur f7f3c02585
bump 287 (#10498) 1 year ago
Bagatur 6598178343
Chat model stream readability nit (#10469) 1 year ago
Riyadh Rahman d45b042d3e
Added gitlab toolkit and notebook (#10384)
### Description

Adds Gitlab toolkit functionality for agent

### Twitter handle

@_laplaceon

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Nante Nantero 41047fe4c3
fix(DynamoDBChatMessageHistory): correct delete_item method call (#10383)
**Description**: 
Fixed a bug introduced in version 0.0.281 in
`DynamoDBChatMessageHistory` where `self.table.delete_item(self.key)`
produced a TypeError: `TypeError: delete_item() only accepts keyword
arguments`. Updated the method call to
`self.table.delete_item(Key=self.key)` to resolve this issue.

Please see also [the official AWS
documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/dynamodb/table/delete_item.html#)
on this **delete_item** method - only `**kwargs` are accepted.

See also the PR, which introduced this bug:
https://github.com/langchain-ai/langchain/pull/9896#discussion_r1317899073

Please merge this, I rely on this delete dynamodb item functionality
(because of GDPR considerations).

**Dependencies**: 
None

**Tag maintainer**: 
@hwchase17 @joshualwhite 

**Twitter handle**: 
[@BenjaminLinnik](https://twitter.com/BenjaminLinnik)
Co-authored-by: Benjamin Linnik <Benjamin@Linnik-IT.de>
1 year ago
Pavel Filatov 30c9d97dda
Remove HuggingFaceDatasetLoader duplicate entry (#10394) 1 year ago
fyasla 55196742be
Fix of issue: (#10421)
DOC: Inversion of 'True' and 'False' in ConversationTokenBufferMemory
property comments (#10420)
1 year ago
John Mai b50d724114
Supported custom ernie_api_base for Ernie (#10416)
Description: Supports a custom ernie_api_base for Ernie
 - ernie_api_base: supports custom Ernie endpoints
 - Rectifies code modifications omitted from #10398

Issue: None
Dependencies: None
Tag maintainer: @baskaryan 
Twitter handle: @JohnMai95
1 year ago
James Barney 50128c8b39
Adding File-Like object support in CSV Agent Toolkit (#10409)
If loading a CSV from a direct or temporary source, passing the
file-like object (a subclass of IOBase) directly allows the agent creation
process to succeed, instead of throwing a ValueError.

Added an additional elif and tweaked the ValueError message.
Added a test to validate this functionality.

Pandas from_csv supports this natively but this current implementation
only accepts strings or paths to files.
https://pandas.pydata.org/docs/user_guide/io.html#io-read-csv-table
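
A rough sketch of the new behavior; the import path is an assumption (it may
live under `langchain_experimental.agents` depending on the version), and
pandas plus an OpenAI API key are assumed to be available:

```python
import io

from langchain.agents import create_csv_agent
from langchain.llms import OpenAI

# A file-like object (any IOBase subclass) can now be passed directly
# instead of a path string.
csv_buffer = io.StringIO("name,age\nalice,30\nbob,25\n")
agent = create_csv_agent(OpenAI(temperature=0), csv_buffer)
print(agent.run("How many rows are in the table?"))
```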

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Bagatur 999163fbd6
Add HF prompt injection detection (#10464) 1 year ago
Bagatur 0f81b3dd2f HF Injection Identifier Refactor 1 year ago