Commit Graph

901 Commits (71025013f85e1954abd2bd340d46728c177b7269)

Author SHA1 Message Date
Nikhil Suresh 8a4670e127 updated formatting changes 1 year ago
Nikhil Suresh b1f649bca5 fixed issue with white space and added unit tests 1 year ago
Nikhil Suresh 6d3485e798 fixed regex to match sources for all cases, also includes source 1 year ago
Predrag Gruevski 47499c6db4
Avoid `type: ignore` suppression by adding mypy type hint. (#9881)
Mypy was not able to determine a good type for `type_to_loader_dict`,
since the values in the dict are functions whose return types are
related to each other in a complex way. One can see this by adding a
line like `reveal_type(type_to_loader_dict)` and running mypy, which
will get mypy to show what type it has inferred for that value.

Adding an explicit type hint to help out mypy avoids the need for a mypy
suppression and allows the code to type-check cleanly.
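A minimal illustrative sketch of the pattern (hypothetical loader functions, not the actual LangChain code): an explicit annotation tells mypy the common callable type shared by the dict's values.

```python
from typing import Any, Callable, Dict

def _load_llm_chain(config: dict) -> Any: ...
def _load_map_reduce_chain(config: dict) -> Any: ...

# Without the annotation, mypy struggles to infer one sensible type for the values;
# the explicit hint lets the module type-check without a `type: ignore` suppression.
type_to_loader_dict: Dict[str, Callable[[dict], Any]] = {
    "llm_chain": _load_llm_chain,
    "map_reduce_chain": _load_map_reduce_chain,
}
```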
1 year ago
maks-operlejn-ds f327535eda
Add conftest file to langchain experimental (#9886)
In order to use the `requires` marker in langchain-experimental, a
*conftest.py* file is needed inside the package. Everything is identical to
the main langchain module.

Co-authored-by: maks-operlejn-ds <maks.operlejn@gmail.com>
1 year ago
William FH 907c57e324
Add collect_runs callback (#9885) 1 year ago
William FH 3103f07e03
Use existing required args obj if specified (#9883)
We always overwrote the required args, even though we infer them by default.
Doing it only the old way makes the LLM guess a value even if an arg is
optional (e.g., for UUIDs).
1 year ago
William FH b14d74dd4d
iMessage loader (#9832)
Add an iMessage chat loader
1 year ago
Predrag Gruevski eb3d1fa93c
Add security warning to experimental `SQLDatabaseChain` class. (#9867)
The most reliable way to not have a chain run an undesirable SQL command
is to not give it database permissions to run that command. That way the
database itself performs the rule enforcement, so it's much easier to
configure and use properly than anything we could add in ourselves.
1 year ago
hughcrt 97741d41c5 Add LLMonitorCallbackHandler 1 year ago
eryk-dsai 7f5713b80a
feat: grammar-based sampling in llama-cpp (#9712)
## Description 

The following PR enables the [grammar-based
sampling](https://github.com/ggerganov/llama.cpp/tree/master/grammars)
in llama-cpp LLM.

In short, loading a file with a formal grammar definition will constrain
model outputs. For instance, one can force the model to generate valid
JSON or generate only Python lists.
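A minimal sketch of what this looks like in use, assuming the `grammar_path` keyword this PR describes; the file paths are placeholders.

```python
from langchain.llms import LlamaCpp

# json.gbnf is a formal grammar file that constrains the output to valid JSON.
llm = LlamaCpp(
    model_path="./models/llama-2-7b.Q4_K_M.gguf",
    grammar_path="./grammars/json.gbnf",
)

print(llm("Describe a person in JSON with name and age:"))
```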

In the follow-up PR we will add:
* docs with some description why it is cool and how it works
* maybe some code sample for some task such as in llama repo

---------

Co-authored-by: Lance Martin <lance@langchain.dev>
Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
William FH cb642ef658
Return feedback (#9629)
Return the feedback values in an eval run result

Also made a helper method to display as a dataframe but it may be
overkill
1 year ago
Bagatur 5e2d0cf54e
bump 275 (#9860) 1 year ago
Leonid Kuligin 00baddf34c fixed enterprise search returning an empty array 1 year ago
Eugene Yurtsev 5edf819524
Qdrant Client: Expose instance for creating client (#9706)
Expose classmethods to conveniently initialize the vectorstore.

The purpose of this PR is to make it easy for users to initialize an
empty vectorstore that's properly pre-configured without having to index
documents into it via `from_documents`.

This will make it easier for users to rely on the following indexing
code: https://github.com/langchain-ai/langchain/pull/9614
to help manage data in the qdrant vectorstore.
1 year ago
Harrison Chase 610f46d83a
accept openai terms (#9826) 1 year ago
Harrison Chase c1badc1fa2
add gmail loader (#9810) 1 year ago
Bagatur 0d01cede03
bump 274 (#9805) 1 year ago
Nikhil Suresh 0da5803f5a
fixed regex to match sources for all cases, also includes source (#9775)
- Description: Updated the regex to handle all the different cases for
string matching (SOURCES, sources, Sources),
  - Issue: https://github.com/langchain-ai/langchain/issues/9774
  - Dependencies: N/A
1 year ago
Sam Partee a28eea5767
Redis metadata filtering and specification, index customization (#8612)
### Description

The previous Redis implementation did not allow for the user to specify
the index configuration (i.e. changing the underlying algorithm) or add
additional metadata to use for querying (i.e. hybrid or "filtered"
search).

This PR introduces the ability to specify custom index attributes and
metadata attributes as well as use that metadata in filtered queries.
Overall, more structure was introduced to the Redis implementation that
should allow for easier maintainability moving forward.

# New Features

The following features are now available with the Redis integration into
Langchain

## Index schema generation

The schema for the index will now be automatically generated if not
specified by the user. For example, the data above has multiple
metadata categories. The following example

```python

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.redis import Redis

embeddings = OpenAIEmbeddings()


rds, keys = Redis.from_texts_return_keys(
    texts,
    embeddings,
    metadatas=metadata,
    redis_url="redis://localhost:6379",
    index_name="users"
)
```

Loading the data in through this and the other ``from_documents`` and
``from_texts`` methods will now generate an index schema in Redis like the
following.

You can view the index schema with the ``redisvl`` tool. [link](redisvl.com)

```bash
$ rvl index info -i users
```


Index Information:

| Index Name | Storage Type | Prefixes | Index Options | Indexing |
|--------------|----------------|---------------|-----------------|------------|
| users | HASH | ['doc:users'] | [] | 0 |

Index Fields:

| Name | Attribute | Type | Field Option | Option Value |
|----------------|----------------|---------|----------------|----------------|
| user | user | TEXT | WEIGHT | 1 |
| job | job | TEXT | WEIGHT | 1 |
| credit_score | credit_score | TEXT | WEIGHT | 1 |
| content | content | TEXT | WEIGHT | 1 |
| age | age | NUMERIC | | |
| content_vector | content_vector | VECTOR | | |


### Custom Metadata specification

The metadata schema generation has the following rules
1. All text fields are indexed as text fields.
2. All numeric fields are indexed as numeric fields.

If you would like to have a text field indexed as a tag field instead, you
can specify overrides like the following for the example data

```python

# this can also be a path to a yaml file
index_schema = {
    "text": [{"name": "user"}, {"name": "job"}],
    "tag": [{"name": "credit_score"}],
    "numeric": [{"name": "age"}],
}

rds, keys = Redis.from_texts_return_keys(
    texts,
    embeddings,
    metadatas=metadata,
    redis_url="redis://localhost:6379",
    index_name="users",
    index_schema=index_schema,  # pass the overrides defined above
)
```
This will change the index specification to 

Index Information:

| Index Name | Storage Type | Prefixes | Index Options | Indexing |
|--------------|----------------|----------------|-----------------|------------|
| users2 | HASH | ['doc:users2'] | [] | 0 |

Index Fields:

| Name | Attribute | Type | Field Option | Option Value |
|----------------|----------------|---------|----------------|----------------|
| user | user | TEXT | WEIGHT | 1 |
| job | job | TEXT | WEIGHT | 1 |
| content | content | TEXT | WEIGHT | 1 |
| credit_score | credit_score | TAG | SEPARATOR | , |
| age | age | NUMERIC | | |
| content_vector | content_vector | VECTOR | | |


and throw a warning to the user (log output) that the generated schema
does not match the specified schema.

```text
index_schema does not match generated schema from metadata.
index_schema: {'text': [{'name': 'user'}, {'name': 'job'}], 'tag': [{'name': 'credit_score'}], 'numeric': [{'name': 'age'}]}
generated_schema: {'text': [{'name': 'user'}, {'name': 'job'}, {'name': 'credit_score'}], 'numeric': [{'name': 'age'}]}
```

As long as this mismatch is intentional, this is fine.

The schema can be defined as a yaml file or a dictionary

```yaml

text:
  - name: user
  - name: job
tag:
  - name: credit_score
numeric:
  - name: age

```

and you pass in a path like

```python
rds, keys = Redis.from_texts_return_keys(
    texts,
    embeddings,
    metadatas=metadata,
    redis_url="redis://localhost:6379",
    index_name="users3",
    index_schema=Path("sample1.yml").resolve()
)
```

This will create the same schema as defined in the dictionary example.


Index Information:

| Index Name | Storage Type | Prefixes | Index Options | Indexing |
|--------------|----------------|----------------|-----------------|------------|
| users3 | HASH | ['doc:users3'] | [] | 0 |

Index Fields:

| Name | Attribute | Type | Field Option | Option Value |
|----------------|----------------|---------|----------------|----------------|
| user | user | TEXT | WEIGHT | 1 |
| job | job | TEXT | WEIGHT | 1 |
| content | content | TEXT | WEIGHT | 1 |
| credit_score | credit_score | TAG | SEPARATOR | , |
| age | age | NUMERIC | | |
| content_vector | content_vector | VECTOR | | |



### Custom Vector Indexing Schema

Users with large use cases may want to change how they formulate the
vector index created by Langchain

To utilize all the features of Redis for vector database use cases like
this, you can now do the following to pass in index attribute modifiers
like changing the indexing algorithm to HNSW.

```python
vector_schema = {
    "algorithm": "HNSW"
}

rds, keys = Redis.from_texts_return_keys(
    texts,
    embeddings,
    metadatas=metadata,
    redis_url="redis://localhost:6379",
    index_name="users3",
    vector_schema=vector_schema
)

```

A more complex example may look like

```python
vector_schema = {
    "algorithm": "HNSW",
    "ef_construction": 200,
    "ef_runtime": 20
}

rds, keys = Redis.from_texts_return_keys(
    texts,
    embeddings,
    metadatas=metadata,
    redis_url="redis://localhost:6379",
    index_name="users3",
    vector_schema=vector_schema
)
```

All names correspond to the arguments you would set if using Redis-py or
RedisVL. (put in doc link later)


### Better Querying

Both vector queries and range (limit) queries are now available, and
metadata is returned by default. Example outputs are shown below.

```python
>>> query = "foo"
>>> results = rds.similarity_search(query, k=1)
>>> print(results)
[Document(page_content='foo', metadata={'user': 'derrick', 'job': 'doctor', 'credit_score': 'low', 'age': '14', 'id': 'doc:users:657a47d7db8b447e88598b83da879b9d', 'score': '7.15255737305e-07'})]

>>> results = rds.similarity_search_with_score(query, k=1, return_metadata=False)
>>> print(results) # no metadata, but with scores
[(Document(page_content='foo', metadata={}), 7.15255737305e-07)]

>>> results = rds.similarity_search_limit_score(query, k=6, score_threshold=0.0001)
>>> print(len(results)) # range query (only above threshold even if k is higher)
4
```

### Custom metadata filtering

A big advantage of Redis in this space is being able to filter on
data stored alongside the vector itself. With the example above, the
following is now possible in LangChain. The equivalence operators are
overridden to describe a new expression language that mimics that of
[redisvl](redisvl.com). This allows for arbitrarily long sequences of
filters that resemble SQL commands and that can be used directly with
vector queries and range queries.

There are two interfaces by which to do so and both are shown. 

```python

>>> from langchain.vectorstores.redis import RedisFilter, RedisNum, RedisText

>>> age_filter = RedisFilter.num("age") > 18
>>> age_filter = RedisNum("age") > 18 # equivalent
>>> results = rds.similarity_search(query, filter=age_filter)
>>> print(len(results))
3

>>> job_filter = RedisFilter.text("job") == "engineer" 
>>> job_filter = RedisText("job") == "engineer" # equivalent
>>> results = rds.similarity_search(query, filter=job_filter)
>>> print(len(results))
2

# fuzzy match text search
>>> job_filter = RedisFilter.text("job") % "eng*"
>>> results = rds.similarity_search(query, filter=job_filter)
>>> print(len(results))
2


# combined filters (AND)
>>> combined = age_filter & job_filter
>>> results = rds.similarity_search(query, filter=combined)
>>> print(len(results))
1

# combined filters (OR)
>>> combined = age_filter | job_filter
>>> results = rds.similarity_search(query, filter=combined)
>>> print(len(results))
4
```

All the above filter results can be checked against the data above.


### Other

  - Issue: #3967 
  - Dependencies: No added dependencies
  - Tag maintainer: @hwchase17 @baskaryan @rlancemartin 
  - Twitter handle: @sampartee

---------

Co-authored-by: Naresh Rangan <naresh.rangan0@walmart.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
nikhilkjha d57d08fd01
Initial commit for comprehend moderator (#9665)
This PR implements a custom chain that wraps Amazon Comprehend API
calls. The custom chain is aimed to be used with LLM chains to provide a
moderation capability that lets you detect and redact PII, toxic, and
intent content in the LLM prompt or the LLM response. The implementation
accepts a configuration object to control which checks will be performed
on an LLM prompt, and it can be used in a variety of setups via the
LangChain Expression Language to detect the configured info not only in
chains but also in other constructs such as a retriever.
The included sample notebook goes over the different configuration
options and how to use it with other chains.

###  Usage sample
```python
from langchain_experimental.comprehend_moderation import BaseModerationActions, BaseModerationFilters

moderation_config = { 
        "filters":[ 
                BaseModerationFilters.PII, 
                BaseModerationFilters.TOXICITY,
                BaseModerationFilters.INTENT
        ],
        "pii":{ 
                "action": BaseModerationActions.ALLOW, 
                "threshold":0.5, 
                "labels":["SSN"],
                "mask_character": "X"
        },
        "toxicity":{ 
                "action": BaseModerationActions.STOP, 
                "threshold":0.5
        },
        "intent":{ 
                "action": BaseModerationActions.STOP, 
                "threshold":0.5
        }
}

comp_moderation_with_config = AmazonComprehendModerationChain(
    moderation_config=moderation_config, #specify the configuration
    client=comprehend_client,            #optionally pass the Boto3 Client
    verbose=True
)

template = """Question: {question}

Answer:"""

prompt = PromptTemplate(template=template, input_variables=["question"])

responses = [
    "Final Answer: A credit card number looks like 1289-2321-1123-2387. A fake SSN number looks like 323-22-9980. John Doe's phone number is (999)253-9876.", 
    "Final Answer: This is a really shitty way of constructing a birdhouse. This is fucking insane to think that any birds would actually create their motherfucking nests here."
]
llm = FakeListLLM(responses=responses)

llm_chain = LLMChain(prompt=prompt, llm=llm)

chain = ( 
    prompt 
    | comp_moderation_with_config 
    | {llm_chain.input_keys[0]: lambda x: x['output'] }  
    | llm_chain 
    | { "input": lambda x: x['text'] } 
    | comp_moderation_with_config 
)

response = chain.invoke({"question": "A sample SSN number looks like this 123-456-7890. Can you give me some more samples?"})

print(response['output'])


```
### Output
```
> Entering new AmazonComprehendModerationChain chain...
Running AmazonComprehendModerationChain...
Running pii validation...
Found PII content..stopping..
The prompt contains PII entities and cannot be processed
```

---------

Co-authored-by: Piyush Jain <piyushjain@duck.com>
Co-authored-by: Anjan Biswas <anjanavb@amazon.com>
Co-authored-by: Jha <nikjha@amazon.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
William FH 1960ac8d25
token chunks (#9739)
Co-authored-by: Andrew <abatutin@gmail.com>
1 year ago
Bagatur 9731ce5a40
bump 273 (#9751) 1 year ago
Fabrizio Ruocco cacaf487c3
Azure Cognitive Search - update sdk b8, mod user agent, search with scores (#9191)
Description: Update Azure Cognitive Search SDK to version b8 (breaking
change)
Customizable User Agent.
Implemented Similarity search with scores 

@baskaryan

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Sergey Kozlov 135cb86215
Fix QuestionListOutputParser (#9738)
This PR fixes `QuestionListOutputParser` text splitting.

`QuestionListOutputParser` incorrectly splits numbered list text into
lines. If the text doesn't end with `\n`, the regex doesn't capture the
last item, so it always returns `n - 1` items and
`WebResearchRetriever.llm_chain` generates fewer queries than requested
in the search prompt.

How to reproduce:

```python
from langchain.retrievers.web_research import QuestionListOutputParser

parser = QuestionListOutputParser()

good = parser.parse(
    """1. This is line one.
    2. This is line two.
    """  # <-- !
)

bad = parser.parse(
    """1. This is line one.
    2. This is line two."""    # <-- No new line.
)

assert good.lines == ['1. This is line one.\n', '2. This is line two.\n'], good.lines
assert bad.lines == ['1. This is line one.\n', '2. This is line two.'], bad.lines
```

NOTE: Last item will not contain a line break but this seems ok because
the items are stripped in the
`WebResearchRetriever.clean_search_query()`.
1 year ago
Jurik-001 d04fe0d3ea
remove Value error "pyspark is not installed. Please install it with `pip i… (#9723)
Description: You cannot execute spark_sql with pyspark versions prior to 3.4
due to the use of pyspark.errors, which was introduced in version 3.4. If
you are below 3.4 you get "pyspark is not installed. Please install it with
`pip install pyspark`", which is not helpful. Also, if you don't have
pyspark installed at all, you already get the error in init. I would rather
surface the underlying errors, but if you have a different idea feel free to
comment.
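To illustrate why the old message was misleading (a sketch, not the actual code): with a pyspark older than 3.4 installed, the `pyspark.errors` import itself fails even though pyspark is present, so "pyspark is not installed" points users at the wrong problem.

```python
import pyspark  # succeeds on any installed pyspark version

# pyspark.errors only exists from pyspark 3.4 onward, so this raises
# ModuleNotFoundError on older versions even though pyspark is installed.
from pyspark.errors import PySparkException
```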

Issue: None
Dependencies: None
Maintainer:

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Margaret Qian 30151c99c7
Update Mosaic endpoint input/output api (#7391)
As noted in prior PRs (https://github.com/hwchase17/langchain/pull/6060,
https://github.com/hwchase17/langchain/pull/7348), the input/output
format has changed a few times as we've stabilized our inference API.
This PR updates the API to the latest stable version as indicated in our
docs: https://docs.mosaicml.com/en/latest/inference.html

The input format looks like this:

`{"inputs": [<prompt>]}`

The output format looks like this:

`{"outputs": [<output_text>]}`
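A rough sketch of the request/response shapes described above, with a placeholder endpoint URL and token (not an official example):

```python
import requests

response = requests.post(
    "https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict",  # placeholder URL
    headers={"Authorization": "<MOSAICML_API_TOKEN>"},
    json={"inputs": ["Write a tagline for an ice cream shop."]},  # new input format
)

# New output format: a list of generated strings under "outputs".
print(response.json()["outputs"][0])
```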
---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Leonid Kuligin 87da56fb1e
Added a pdf parser based on DocAI (#9579)
#9578

---------

Co-authored-by: Leonid Kuligin <kuligin@google.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
1 year ago
Naama Magami adb21782b8
Add del vector pgvector + adding modification time to confluence and google drive docs (#9604)
Description:
- adding implementation of delete for pgvector
- adding modification time in docs metadata for confluence and google
drive.

Issue:
https://github.com/langchain-ai/langchain/issues/9312

Tag maintainer: @baskaryan, @eyurtsev, @hwchase17, @rlancemartin.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
1 year ago
Erick Friis 3e5cda3405
Hub Push Ergonomics (#9731)
Improves the hub pushing experience, returning a url instead of just a
commit hash.

Requires hub sdk 0.1.8
1 year ago
Tudor Golubenco dc30edf51c
Xata as a chat message memory store (#9719)
This adds Xata as a memory store also to the python version of
LangChain, similar to the [one for
LangChain.js](https://github.com/hwchase17/langchainjs/pull/2217).

I have added a Jupyter Notebook with a simple and a more complex example
using an agent.

To run the integration test, you need to execute something like:

```
XATA_API_KEY='xau_...' XATA_DB_URL="https://demo-uni3q8.eu-west-1.xata.sh/db/langchain"  poetry run pytest tests/integration_tests/memory/test_xata.py
```

Where `langchain` is the database you create in Xata.
1 year ago
William FH dff00ea91e
Chat Loaders (#9708)
Still working out interface/notebooks + need discord data dump to test
out things other than copy+paste

Update:
- Going to remove the 'user_id' arg in the loaders themselves and just
standardize on putting the "sender" arg in the extra kwargs. Then can
provide a utility function to map these to ai and human messages
- Going to move the discord one into just a notebook since I don't have
a good dump to test on and copy+paste maybe isn't the greatest thing to
support in v0
- Need to do more testing on slack since it seems the dump only includes
channels and NOT 1 on 1 convos
-

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
1 year ago
Bagatur 0f48e6c36e
fix integration deps (#9722) 1 year ago
Bagatur a0800c9f15
rm google api core and add more dependency testing (#9721) 1 year ago
Andrew White 2bcf581a23
Added search parameters to qdrant max_marginal_relevance_search (#7745)
Adds the qdrant search filter/params to the
`max_marginal_relevance_search` method, which is already present on other
search methods. I did not add `offset` for pagination, because its behavior
would be ambiguous in this setting (since we fetch extra and down-select).

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Kacper Łukawski <lukawski.kacper@gmail.com>
1 year ago
Tomaz Bratanic dacf96895a
Add the option to use separate LLMs for GraphCypherQA chain (#9689)
The graph chains are different in that they use two LLMChains instead of
one like the retrievalQA chains. Therefore, sometimes you want to use a
different LLM to generate the database query than the one used to generate
the final answer.

This feature makes it more convenient to use different LLMs in the same
chain.

I have also renamed the Graph DB QA Chain to Neo4j DB QA Chain in the
documentation only, as it is used only for Neo4j. The naming was
ambiguous since it was the first graph QA chain added, and I wasn't sure
how you want to position it.
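A sketch of the described usage, assuming the separate-LLM keyword arguments (`cypher_llm` / `qa_llm`) and placeholder Neo4j credentials:

```python
from langchain.chains import GraphCypherQAChain
from langchain.chat_models import ChatOpenAI
from langchain.graphs import Neo4jGraph

graph = Neo4jGraph(url="bolt://localhost:7687", username="neo4j", password="password")

chain = GraphCypherQAChain.from_llm(
    cypher_llm=ChatOpenAI(temperature=0, model="gpt-4"),      # generates the Cypher query
    qa_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo"),  # writes the final answer
    graph=graph,
    verbose=True,
)

print(chain.run("How many actors played in Top Gun?"))
```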
1 year ago
Bagatur f5ea725796
bump 272 (#9704) 1 year ago
Patrick Loeber 6bedfdf25a
Fix docs for AssemblyAIAudioTranscriptLoader (shorter import path) (#9687)
Uses the shorter import path

`from langchain.document_loaders import` instead of the full path
`from langchain.document_loaders.assemblyai`

Applies those changes to the docs and the unit test.

See #9667 that adds this new loader.
1 year ago
Nuno Campos 78ffcdd9a9 Lint 1 year ago
Nuno Campos 20d2c0571c Do not share executors between parent and child tasks 1 year ago
Harrison Chase 9963b32e59
Harrison/multi vector (#9700) 1 year ago
Leonid Ganeline c19888c12c
docstrings: `vectorstores` consistency (#9349)
 
- updated the top-level descriptions to a consistent format;
- changed several `ValueError` to `ImportError` in the import cases;
- changed the format of several internal functions from "name" to
"_name". So, these functions are not shown in the Top-level API
Reference page (with lists of classes/functions)
1 year ago
Kim Minjong d0ff0db698
Update ChatOpenAI._stream to respect finish_reason (#9672)
Currently, ChatOpenAI._stream does not reflect finish_reason to
generation_info. Change it to reflect that.

Same patch as https://github.com/langchain-ai/langchain/pull/9431 , but
also applies to _stream.
1 year ago
Patrick Loeber 5990651070
Add new document_loader: AssemblyAIAudioTranscriptLoader (#9667)
This PR adds a new document loader `AssemblyAIAudioTranscriptLoader`
that transcribes audio files with the [AssemblyAI
API](https://www.assemblyai.com) and loads the transcribed text into
documents.

- Add new document_loader with class `AssemblyAIAudioTranscriptLoader`
- Add optional dependency `assemblyai`
- Add unit tests (using a Mock client)
- Add docs notebook

This is the equivalent to the JS integration already available in
LangChain.js. See the [LangChain JS docs AssemblyAI
page](https://js.langchain.com/docs/modules/data_connection/document_loaders/integrations/web_loaders/assemblyai_audio_transcription).

At its simplest, you can use the loader to get a transcript back from an
audio file like this:

```python
from langchain.document_loaders.assemblyai import AssemblyAIAudioTranscriptLoader

loader =  AssemblyAIAudioTranscriptLoader(file_path="./testfile.mp3")
docs = loader.load()
```

To use it, it needs the `assemblyai` python package installed, and the
environment variable `ASSEMBLYAI_API_KEY` set with your API key.
Alternatively, the API key can also be passed as an argument.

Twitter handles to shout out if so kindly 🙇
[@AssemblyAI](https://twitter.com/AssemblyAI) and
[@patloeber](https://twitter.com/patloeber)

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
1 year ago
Eugene Yurtsev 9e1dbd4b49 x 1 year ago
Eugene Yurtsev b88dfcb42a
Add indexing support (#9614)
This PR introduces a persistence layer to help with indexing workflows into
vectorstores.

The indexing code helps users to:

1. Avoid writing duplicated content into the vectorstore
2. Avoid over-writing content if it's unchanged

Importantly, this keeps working even if the content being written is
derived via a set of transformations from some source content (e.g.,
indexing children documents that were derived from parent documents by
chunking).

The two main components are:

1. A persistence layer that keeps track of which keys were updated and
when. Keeping track of the timestamp of updates allows old content to be
cleaned up safely and with minimal complexity.
2. HashedDocument, which is used to hash the contents (including metadata)
of the documents. We rely on the hashes for identifying duplicates.

The indexing code works with **ANY** document loader. To add
transformations to the documents, users for now can add a custom document
loader that composes an existing loader together with document transformers.
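A rough usage sketch, assuming the `index()` / `SQLRecordManager` names from the LangChain indexing docs and a local Chroma collection as the vectorstore:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.indexes import SQLRecordManager, index
from langchain.schema import Document
from langchain.vectorstores import Chroma

vectorstore = Chroma(collection_name="demo", embedding_function=OpenAIEmbeddings())
record_manager = SQLRecordManager("chroma/demo", db_url="sqlite:///record_manager_cache.sql")
record_manager.create_schema()

docs = [Document(page_content="hello world", metadata={"source": "a.txt"})]

# Re-running this is idempotent: unchanged content is neither duplicated nor re-written.
index(docs, record_manager, vectorstore, cleanup="incremental", source_id_key="source")
```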

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
刘 方瑞 c215481531
Update default index type and metric type for MyScale vector store (#9353)
We update the default index type from `IVFFLAT` to `MSTG`, a new vector
type developed by MyScale.
1 year ago
Joshua Sundance Bailey a9c86774da
Anthropic: Allow the use of kwargs consistent with ChatOpenAI. (#9515)
- Description: ~~Creates a new root_validator in `_AnthropicCommon` that
allows the use of `model_name` and `max_tokens` keyword arguments.~~
Adds pydantic field aliases to support `model_name` and `max_tokens` as
keyword arguments. Ultimately, this makes `ChatAnthropic` more
consistent with `ChatOpenAI`, making the two classes more
interchangeable for the developer (see the sketch below).
  - Issue: https://github.com/langchain-ai/langchain/issues/9510
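A minimal sketch of the aliased keyword arguments described above (values are examples):

```python
from langchain.chat_models import ChatAnthropic

# `model_name` and `max_tokens` now work as keyword arguments, mirroring ChatOpenAI.
chat = ChatAnthropic(model_name="claude-2", max_tokens=256, temperature=0)
```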

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Bagatur 342087bdfa
fix integration test imports (#9669) 1 year ago
Keras Conv3d cbaea8d63b
tair fix distance_type error, and add hybrid search (#9531)
- fix: distance_type error, 
- feature: Tair add hybrid search

---------

Co-authored-by: thw <hanwen.thw@alibaba-inc.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Eugene Yurtsev cd81e8a8f2
Add exclude to GenericLoader.from_file_system (#9539)
support exclude param in GenericLoader.from_filesystem
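A sketch of the new parameter in use, with illustrative paths and glob patterns:

```python
from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers import LanguageParser

loader = GenericLoader.from_filesystem(
    "./my_project",
    glob="**/*",
    suffixes=[".py"],
    exclude=["**/tests/**"],  # new: skip files matching these glob patterns
    parser=LanguageParser(),
)
docs = loader.load()
```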

---------

Co-authored-by: Kyle Pancamo <50267605+KylePancamo@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Jacob Lee 278ef0bdcf
Adds ChatOllama (#9628)
@rlancemartin

---------

Co-authored-by: Adilkhan Sarsen <54854336+adolkhan@users.noreply.github.com>
Co-authored-by: Kim Minjong <make.dirty.code@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Lance Martin <lance@langchain.dev>
Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Nuno Campos 20ce283fa7 Format 1 year ago
Nuno Campos 6424b3cde0 Add another test 1 year ago
William FH da18e177f1 Update libs/langchain/langchain/schema/runnable/base.py
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
1 year ago
Nuno Campos c326751085 Lint 1 year ago
Nuno Campos 6d19709b65 RunnableLambda, if func returns a Runnable, run it 1 year ago
Nuno Campos 677da6a0fd Add support for async funcs in RunnableSequence 1 year ago
Nuno Campos 1751fe114d Add one more test 1 year ago
Nuno Campos 882b97cfd2 Lint 1 year ago
Nuno Campos 3ddabe8b2c Code review 1 year ago
Nuno Campos fdcd50aab4 Extend test 1 year ago
Nuno Campos 9777c2801d Update method and docstring 1 year ago
Nuno Campos 93bbf67afc WIP
Add test

Add test

Lint
1 year ago
Nuno Campos c184be5511 Use a shared executor for all parallel calls 1 year ago
Nuno Campos db4b256a28 Add error for batch of 0 1 year ago
Nuno Campos 3458489936 Lint 1 year ago
Nuno Campos e420bf22b6 Lint 1 year ago
Nuno Campos cc83f54694 L:int 1 year ago
Nuno Campos d414d47c78 Use a shared executor for all parallel calls 1 year ago
Bagatur a40c12bb88
Update the nlpcloud connector after some changes on the NLP Cloud API (#9586)
- Description: remove some text generation deprecated parameters and
update the embeddings doc,
- Tag maintainer: @rlancemartin
1 year ago
Bagatur e2e582f1f6
Fixed source key name for docugami loader (#8598)
The Docugami loader was not returning the source metadata key. This was
triggering this exception when used with retrievers, per
https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/schema/prompt_template.py#L193C1-L195C41

The fix is simple and just updates the metadata key name for the
document each chunk is sourced from, from "name" to "source" as
expected.

I tested by running the python notebook that has an end to end scenario
in it.

Tagging DataLoader maintainers @rlancemartin @eyurtsev
1 year ago
karynzv 5508baf1eb
Add CrateDB prompt (#9657)
Adds a prompt template for the CrateDB SQL dialect.
1 year ago
Bagatur a8e8a31b41 Merge branch 'master' into bagatur/locals_in_config 1 year ago
Bagatur ef2500584c fmt 1 year ago
Zizhong Zhang 8a03836160
docs: fix PromptGuard docs (#9659)
Fix PromptGuard docs. Noticed several trivial issues on the docs when
integrating the new class.
cc @baskaryan
1 year ago
Guy Korland 39a5d02225
Cleanup of ruff warnings use isinstance() instead of type() (#9655)
Minor cosmetic PR just cleanup of `ruff` warnings use `isinstance()`
instead of `type()`
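For reference, the kind of change ruff (rule E721) flags, sketched on a throwaway variable:

```python
x = "hello"

if type(x) == str:      # flagged: comparing types directly
    pass

if isinstance(x, str):  # preferred: also handles subclasses
    pass
```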
1 year ago
Joseph McElroy 2a06e7b216
ElasticsearchStore: improve error logging for adding documents (#9648)
It's not obvious what the error is when you cannot index. This PR adds the
ability to log the first error's reason, to help the user diagnose the
issue.

Also added some more documentation for when you want to use the
vectorstore with an embedding model deployed in elasticsearch.

Credit: @elastic and @phoey1
1 year ago
Julien Salinas f1072cc31f
Merge branch 'master' into master 1 year ago
Jun Liu b379c5f9c8
Fixed the error on ConfluenceLoader when content_format=VIEW and `keep_markdown_format`=True (#9633)
- Description:

when I set `content_format=ContentFormat.VIEW` and
`keep_markdown_format=True` on ConfluenceLoader, it shows the following
error:
```
langchain/document_loaders/confluence.py", line 459, in process_page
    page["body"]["storage"]["value"], heading_style="ATX"
KeyError: 'storage'
```
The reason is that the content format was set to `view`, but the loader was
still trying to get the content from `page["body"]["storage"]["value"]`.

Also added the other content formats which are supported by the Atlassian
API:

https://stackoverflow.com/questions/34353955/confluence-rest-api-expanding-page-body-when-retrieving-page-by-title/34363386#34363386

  - Issue: Not applicable.

  - Dependencies: Added the optional dependency `markdownify` for anyone
who wants to extract in markdown format.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Gabriel Fu b2d9970fc1
Allow specifying dtype in `langchain.llms.VLLM` (#9635)
- Description: add `dtype` argument for VLLM (see the sketch below)
  - Issue: #9593 
  - Dependencies: none
  - Tag maintainer: @hwchase17, @baskaryan
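A minimal sketch assuming the new `dtype` keyword; the model name and value are examples:

```python
from langchain.llms import VLLM

llm = VLLM(
    model="mosaicml/mpt-7b",
    dtype="half",          # new: pass the desired tensor dtype through to vLLM
    max_new_tokens=128,
)

print(llm("What is the capital of France?"))
```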
1 year ago
anifort 900c1f3e8d
Add support for structured data sources with google enterprise search (#9037)
- Description: Added the capability to handle structured data from
google enterprise search
- Issue: Retriever failed when the underlying search engine was integrated
with structured data
  - Dependencies: google-api-core
  - Tag maintainer: @jarokaz
  - Twitter handle: anifort

---------

Co-authored-by: Christos Aniftos <aniftos@google.com>
Co-authored-by: Holt Skinner <13262395+holtskinner@users.noreply.github.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
1 year ago
Harrison Chase 02545a54b3
python repl improvement for csv agent (#9618) 1 year ago
Erick Friis fc64e6349e
Hub stub updates (#9577)
Updates the hub stubs to not fail when no api key is found. For
supporting singleton tenants and default values from sdk 0.1.6.

Also adds the ability to define is_public and description for backup
repo creation on push.
1 year ago
Kim Minjong ca8232a3c1
Update BaseChatModel.astream to respect generation_info (#9430)
Currently, generation_info is not respected by only reflecting messages
in chunks. Change it to add generations so that generation chunks are
merged properly.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
1 year ago
Bagatur 81163e3c0c
parent retriever nit (#9570)
If ids are nullable, it seems like they should have a default value of None.
This mirrors the VectorStore interface as well. cc @mcantillon21 @jacoblee93
1 year ago
Myeongseop Kim f1e602996a
import tqdm.auto instead of tqdm tqdm for OpenAIEmbeddings (#9584)
- Description: the current code does not work very well in Jupyter notebooks,
so I changed the code so that it imports `tqdm.auto` instead (see the sketch below).
  - Issue: #9582 
  - Dependencies: N/A
  - Tag maintainer: @hwchase17, @baskaryan
  - Twitter handle: N/A
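The change boils down to the notebook-aware import, roughly:

```python
# tqdm.auto picks the right progress-bar frontend: a widget in Jupyter, plain text in a terminal.
from tqdm.auto import tqdm

for _ in tqdm(range(3)):
    pass
```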

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
1 year ago
Predrag Gruevski d564ec944c
`poetry lock` the experimental package. (#9478) 1 year ago
Predrag Gruevski 65e893b9cd
`poetry lock` on langchain. (#9476) 1 year ago
Predrag Gruevski 3c7cc4d440
Test experimental package with `langchain` on `master` branch. (#9621)
It's possible that langchain-experimental works fine with the latest
*published* langchain, but is broken with the langchain on `master`.
Unfortunately, you can see this is currently the case — this is why this
PR also includes a minor fix for the `langchain` package itself.

We want to catch situations like that *before* releasing a new
langchain, hence this test.
1 year ago
Eugene Yurtsev 3408810748
Add batch util (#9620)
Add `batch` utility to langchain
1 year ago
Bagatur 2b663089b5
bump 271 (#9615) 1 year ago
klae01 b868ef23bc
Add AINetwork blockchain toolkit integration (#9527)
# Description
This PR introduces a new toolkit for interacting with the AINetwork
blockchain. The toolkit provides a set of tools for performing various
operations on the AINetwork blockchain, such as transferring AIN,
reading and writing values to the blockchain database, managing apps,
setting rules and owners.

# Dependencies
[ain-py](https://github.com/ainblockchain/ain-py) >= 1.0.2

# Misc
The example notebook
(langchain/docs/extras/integrations/toolkits/ainetwork.ipynb) is in the
PR

---------

Co-authored-by: kriii <kriii@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Bagatur e99ef12cb1
Bagatur/litellm model name (#9613)
Co-authored-by: ishaan-jaff <ishaanjaffer0324@gmail.com>
1 year ago
Harrison Chase 1720e99397
add variables for field names (#9563) 1 year ago
Anthony Mahanna dfb9ff1079
bugfix: ArangoDB Empty Schema Case (#9574)
- Introduces a conditional in `ArangoGraph.generate_schema()` to exclude
empty ArangoDB Collections from the schema
- Add empty collection test case

Issue: N/A
Dependencies: None
1 year ago
Philippe PRADOS d4c49b16e4
Fix ChatMessageHistory (#9594)
The initialization of the message list in ChatMessageHistory is buggy:
the list is shared with all instances.
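An illustrative sketch of this class of bug (not the actual LangChain code), showing how a class-level mutable default ends up shared across instances:

```python
class BuggyHistory:
    messages = []  # class attribute: one list shared by every instance

    def add(self, message: str) -> None:
        self.messages.append(message)

class FixedHistory:
    def __init__(self) -> None:
        self.messages = []  # instance attribute: each history gets its own list

    def add(self, message: str) -> None:
        self.messages.append(message)

a, b = BuggyHistory(), BuggyHistory()
a.add("hi")
assert b.messages == ["hi"]  # surprising: b sees a's message

c, d = FixedHistory(), FixedHistory()
c.add("hi")
assert d.messages == []      # fixed: histories are independent
```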
1 year ago
toddkim95 fba29f203a
Add to support polars (#9610)
### Description
Polars is a DataFrame interface on top of an OLAP Query Engine
implemented in Rust.
Polars is faster to read than pandas, so I'm looking forward to seeing
it added to the document loader.

### Dependencies
polars (https://pola-rs.github.io/polars-book/user-guide/)

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Aashish Saini 3c4f32c8b8
Replacing Exception type from ValueError to ImportError (#9588)
I have restructured the code to ensure uniform handling of ImportError.
In place of previously used ValueError, I've adopted the standard
practice of raising ImportError with explanatory messages. This
modification enhances code readability and clarifies that any problems
stem from module importation.
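The pattern being standardized, sketched with a hypothetical optional dependency:

```python
try:
    import some_optional_pkg  # hypothetical optional dependency
except ImportError as e:
    raise ImportError(
        "Could not import some_optional_pkg. "
        "Please install it with `pip install some_optional_pkg`."
    ) from e
```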

@eyurtsev , @baskaryan 

Thanks
1 year ago
Julien Salinas 033b874701 Remove some deprecated text generation parameters. 1 year ago
Bagatur 4e7e6bfe0a revert 1 year ago
Bagatur a9bf409a09 param 1 year ago
Bagatur fa478638a9 Merge branch 'master' into bagatur/locals_in_config 1 year ago
Bagatur 182b059bf4 param 1 year ago
Bagatur 04f2d69b83
improve confluence doc loader param validation (#9568) 1 year ago
Zizhong Zhang 00eff8c4a7
feat: Add PromptGuard integration (#9481)
Add PromptGuard integration
-------
There are two approaches to integrate PromptGuard with a LangChain
application.

1. PromptGuardLLMWrapper
2. functions that can be used in LangChain expression.

-----
- Dependencies
`promptguard` python package, which is a runtime requirement if you'd
try out the demo.

- @baskaryan @hwchase17 Thanks for the ideas and suggestions along the
development process.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Sathindu 652c542b2f
fix: Imports for the ConfluenceLoader:process_page (#9432)
### Description
When we're loading documents using the `ConfluenceLoader.load` function
and both `include_comments=True` and `keep_markdown_format=True` are set,
we get an error saying `NameError: free variable 'BeautifulSoup'
referenced before assignment in enclosing scope`.
    
    loader = ConfluenceLoader(url="URI", token="TOKEN")
    documents = loader.load(
        space_key="SPACE", 
        include_comments=True, 
        keep_markdown_format=True, 
    )

This happens because the previous imports only consider the
`keep_markdown_format` parameter; however, including the comments requires
`BeautifulSoup`.

Now it's fixed to handle all four scenarios, considering both
`include_comments` and `keep_markdown_format`.

### Twitter
`@SathinduGA`

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Mike Salvatore 7c0b1b8171
Add session to ConfluenceLoader.__init__() (#9437)
- Description: Allows the user of `ConfluenceLoader` to pass a
`requests.Session` object in lieu of an authentication mechanism (see the
sketch below)
- Issue: None
- Dependencies: None
- Tag maintainer: @hwchase17
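A sketch of the described usage, assuming the new `session` keyword and a placeholder Confluence URL/token:

```python
import requests
from langchain.document_loaders import ConfluenceLoader

session = requests.Session()
session.headers.update({"Authorization": "Bearer <CONFLUENCE_TOKEN>"})

# Instead of passing username/api_key/token, hand the loader a pre-configured session.
loader = ConfluenceLoader(url="https://example.atlassian.net/wiki", session=session)
docs = loader.load(space_key="SPACE")
```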
1 year ago
Kim Minjong 3d1095218c
Update ChatOpenAI._astream to respect finish_reason (#9431)
Currently, ChatOpenAI._astream does not reflect finish_reason to
generation_info. Change it to reflect that.
1 year ago
Matthew Zeiler 949b2cf177
Improvements to the Clarifai integration (#9290)
- Improved docs
- Improved performance in multiple ways through batching, threading,
etc.
 - fixed error message 
 - Added support for metadata filtering during similarity search.

@baskaryan PTAL
1 year ago
ricki-epsilla 66a47d9a61
add Epsilla vectorstore (#9239)
[Epsilla](https://github.com/epsilla-cloud/vectordb) vectordb is an
open-source vector database that leverages the advanced academic
parallel graph traversal techniques for vector indexing.
This PR adds basic integration with
[pyepsilla](https://github.com/epsilla-cloud/epsilla-python-client)(Epsilla
vectordb python client) as a vectorstore.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Bagatur dda5b1e370
Bagatur/doc loader confluence (#9524)
Co-authored-by: chanjetsdp <chanjetsdp@chanjet.com>
1 year ago
Predrag Gruevski de1f63505b
Add `py.typed` file to `langchain-experimental`. (#9557)
The package is linted with mypy, so its type hints are correct and
should be exposed publicly. Without this file, the type hints remain
private and cannot be used by downstream users of the package.
1 year ago
Raynor Chavez 973866c894
fix: Updated marqo integration for marqo version 1.0.0+ (#9521)
- Description: Updated marqo integration to use tensor_fields instead of
non_tensor_fields. Upgraded marqo version to 1.2.4
  - Dependencies: marqo 1.2.4

---------

Co-authored-by: Raynor Kirkson E. Chavez <raynor.chavez@192.168.254.171>
Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Bagatur c7a5bb6031
bump 270 (#9549) 1 year ago
Nuno Campos 28e1ee4891
Nc/small fixes 21aug (#9542)
1 year ago
Bagatur d11841d760
bump 269 (#9487) 1 year ago
axiangcoding 05aa02005b
feat(llms): support ERNIE Embedding-V1 (#9370)
- Description: support [ERNIE
Embedding-V1](https://cloud.baidu.com/doc/WENXINWORKSHOP/s/alj562vvu),
which is part of ERNIE ecology
- Issue: None
- Dependencies: None
- Tag maintainer: @baskaryan

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
José Ferraz Neto f116e10d53
Add SharePoint Loader (#4284)
- Added a loader (`SharePointLoader`) that can pull documents (`pdf`,
`docx`, `doc`) from the [SharePoint Document
Library](https://support.microsoft.com/en-us/office/what-is-a-document-library-3b5976dd-65cf-4c9e-bf5a-713c10ca2872).
- Added a Base Loader (`O365BaseLoader`) to be used for all Loaders that
use [O365](https://github.com/O365/python-o365) Package
- Code refactoring on `OneDriveLoader` to use the new `O365BaseLoader`.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Utku Ege Tuluk bb4f7936f9
feat(llms): add streaming support to textgen (#9295)
- Description: Added streaming support to the textgen component in the
llms module.
  - Dependencies: websocket-client = "^1.6.1"
1 year ago
Eugene Yurtsev 02c5c13a6e
Fast linters go first (#9501)
Proposal to reverse the order of linters based on the principle of
running the
fast ones first.
1 year ago
Ofer Mendelevitch a758496236
Fixed issue with metadata in query (#9500)
- Description: Changed metadata retrieval so that it combines Vectara
doc level and part level metadata
  - Tag maintainer: @rlancemartin
  - Twitter handle: @ofermend
1 year ago
Eugene Yurtsev e51bccdb28
Add strict flag to the JSON parser (#9471)
This updates the default configuration since I think it's almost always
what we want to happen. But we should evaluate whether there are any issues.
1 year ago
Taqi Jaffri 5cd244e9b7 CR feedback 1 year ago
Predrag Gruevski be9bc62f8b
Fix bash test regex for Linux under WSL2. (#9475)
It fails with `Permission denied` and not `not found`. Both seem
reasonable.
1 year ago
Lorenzo 5b3dbf12a5
Uniform valid suffixes and clarify exceptions (#9463)
**Description**:
- Unified the current valid suffixes (file formats) for loading agents
from hubs and files (to better handle future additions);
 - Clarified exception messages (also in unit test).
1 year ago
Brendan Collins 9f545825b7
Added Geometry Validation, Geometry Metadata, and WKT instead of Python str() to GeoDataFrame Loader (#9466)
@rlancemartin The current implementation within `Geopandas.GeoDataFrame`
loader uses the python builtin `str()` function on the input geometries.
While this looks very close to WKT (Well known text), Python's str
function doesn't guarantee that.

In the interest of interop., I've changed to the of use `wkt` property
on the Shapely geometries for generating the text representation of the
geometries.
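A small illustration of the difference, using a shapely `Point`:

```python
from shapely.geometry import Point

p = Point(1.0, 2.0)

print(str(p))  # looks like WKT, but str() makes no guarantee about the format
print(p.wkt)   # explicit Well-Known Text representation, e.g. "POINT (1 2)"
```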

Also, included here:
- validation of the input `page_content_column` as being a GeoSeries.
- geometry `crs` (Coordinate Reference System) / bounds
(xmin/ymin/xmax/ymax) added to Document metadata. Having the CRS is
critical... having the bounds is just helpful!

I think there is a larger question of "Should the geometry live in the
`page_content`, or should the record be better summarized and tuck the
geom into metadata?" ...something for another day and another PR.
1 year ago
Kacper Łukawski 616e728ef9
Enhance qdrant vs using async embed documents (#9462)
This is an extension of #8104. I updated some of the signatures so all
the tests pass.

@danhnn I couldn't commit to your PR, so I created a new one. Thanks for
your contribution!

@baskaryan Could you please merge it?

---------

Co-authored-by: Danh Nguyen <dnncntt@gmail.com>
1 year ago
Matt Robinson 83d2a871eb
fix: apply unstructured preprocess functions (#9473)
### Summary

Fixes a bug from #7850 where post processing functions in Unstructured
loaders were not applied. Adds an assertion to the test to verify the post
processing function was applied, and also updates the explanation in the
example notebook.
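A sketch of the fixed behavior in use, assuming the `post_processors` argument and the `clean_extra_whitespace` cleaner from `unstructured`:

```python
from langchain.document_loaders import UnstructuredFileLoader
from unstructured.cleaners.core import clean_extra_whitespace

loader = UnstructuredFileLoader(
    "./example.pdf",       # placeholder file
    mode="elements",
    post_processors=[clean_extra_whitespace],  # now actually applied to each element
)
docs = loader.load()
```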
1 year ago
William FH 292ae8468e
Let you specify run id in trace as chain group (#9484)
I think we'll deprecate this soon anyway but still nice to be able to
fetch the run id
1 year ago
Predrag Gruevski df8e35fd81
Remove incorrect ABC from two Elasticsearch classes. (#9470)
Neither is an ABC because their own example code instantiates them directly.
1 year ago
Predrag Gruevski 82f28ca9ef
`ChatPromptTemplate` is not an `ABC`, it's instantiated directly. (#9468)
Its own `__add__` method constructs `ChatPromptTemplate` objects
directly, it cannot be abstract.

Found while debugging something else with @nfcampos.
1 year ago
vamseeyarla 82fb56b79c
Issue 9401 - SequentialChain runs the same callbacks over and over in async mode (#9452)
Issue: https://github.com/langchain-ai/langchain/issues/9401

In async mode, the SequentialChain implementation seems to run the same
callbacks over and over since it re-uses the same callbacks object.

Langchain version: 0.0.264, master

The implementation of this async route differs from the sync route; the
sync approach follows the right pattern of generating a new callbacks
object instead of re-using the old one, thus avoiding the cascading run of
callbacks at each step.

Async mode:
```
        _run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()
        callbacks = _run_manager.get_child()
        ...
        for i, chain in enumerate(self.chains):
            _input = await chain.arun(_input, callbacks=callbacks)
            ...
```

Regular mode:
```
        _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()
        for i, chain in enumerate(self.chains):
            _input = chain.run(_input, callbacks=_run_manager.get_child(f"step_{i+1}"))
            ...
```

Notice how we are reusing the callbacks object in the Async code which
will have a cascading effect as we run through the chain. It runs the
same callbacks over and over resulting in issues.

Solution:
Define the async function in the same pattern as the regular one and
added tests.
---------

Co-authored-by: vamsee_yarlagadda <vamsee.y@airbnb.com>
1 year ago
William FH c29fbede59
Wfh/rm num repetitions (#9425)
Makes it hard to do test run comparison views and we'd probably want to
just run multiple runs right now
1 year ago
Predrag Gruevski eee0d1d0dd
Update repository links in the package metadata. (#9454) 1 year ago
Bagatur 50b8f4dcc7
bump 268 (#9455) 1 year ago
Nuno Campos 354c42afd2 Lint 1 year ago
Nuno Campos 4452314aab Merge branch 'master' into bagatur/locals_in_config 1 year ago
Nuno Campos d5eb228874
Add kwargs to all other optional runnable methods (#9439)
1 year ago
Leonid Ganeline a3dd4dcadf
📖 docstrings `retrievers` consistency (#9422)
📜 
- updated the top-level descriptions to a consistent format;
- changed the format of several 100% internal functions from "name" to
"_name". So, these functions are not shown in the Top-level API
Reference page (with lists of classes/functions)
1 year ago
Nuno Campos 9417961b17
Add lock on tee peer cleanup (#9446)
1 year ago
Nuno Campos d3f10d2f4f Update test 1 year ago
Nuno Campos 6ae58da668 Assign defaults in batch calls 1 year ago
Nuno Campos ddcb4ff5fb Li t 1 year ago
Nuno Campos 1baedc4e18 Move patch_config 1 year ago
Nuno Campos 46f3850794 Lint 1 year ago
Nuno Campos 24a197f96a Merge branch 'master' into bagatur/locals_in_config 1 year ago
Nuno Campos 8ddaaf3d41 Move config helpers 1 year ago
Nuno Campos a5e7dcec61 Lint 1 year ago
Nuno Campos c1b1666ec8 Ensure config defaults apply even when a config is passed in 1 year ago
Nuno Campos 7fe474d198 Update snapshots 1 year ago
Jacob Lee 0689628489
Adds streaming for runnable maps (#9283)
@nfcampos @baskaryan

---------

Co-authored-by: Nuno Campos <nuno@boringbits.io>
1 year ago
Bagatur ab21af71be wip 1 year ago
Bagatur 6f69b19ff5 wip tests 1 year ago
Bagatur 9e906c39ba nit 1 year ago
Bagatur 6b0a849f59 fix 1 year ago
Bagatur c447e9a854 cr 1 year ago
Bagatur bd80cad6db add 1 year ago
Bagatur 8c1a528c71 cr 1 year ago
Bagatur 25cbcd9374 merge 1 year ago
Aashish Saini ce78877a87
Replaced instances of raising ValueError with raising ImportError. (#9388)
Refactored code to ensure consistent handling of ImportError. Replaced
instances of raising ValueError with raising ImportError.

The choice of raising a ValueError here is somewhat unconventional and
might lead to confusion for anyone reading the code. Typically, when
dealing with import-related errors, the recommended approach is to raise
an ImportError with a descriptive message explaining the issue. This
provides a clearer indication that the problem is related to importing
the required module.

@hwchase17 , @baskaryan , @eyurtsev 

Thanks
Aashish

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Bagatur 8c986221e4
make openapi_schema_pydantic opt (#9408) 1 year ago
Eugene Yurtsev 77b359edf5
More missing type annotations (#9406)
This PR fills in more missing type annotations on pydantic models. 

It's OK if it missed some annotations, we just don't want it to get
annotations wrong at this stage.

I'll do a few more passes over the same files!
1 year ago
Bagatur a69d1b84f4
bump 267 (#9403) 1 year ago
Nuno Campos c0d67420e5
Use a submodule for pydantic v1 compat (#9371)
1 year ago
Bagatur 995ef8a7fc
unpin pydantic (#9356) 1 year ago
Tong Gao 3c8e9a9641
Fix typos in eval_chain.py (#9365)
Fixed two minor typos.
1 year ago
Eugene Yurtsev 2673b3a314
Create pydantic v1 namespace in langchain (#9254)
Create pydantic v1 namespace in langchain experimental
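A sketch of how code is expected to use the namespace (the field definitions are examples):

```python
# Importing pydantic v1 symbols through the langchain namespace keeps code working
# regardless of which major pydantic version is installed alongside it.
from langchain.pydantic_v1 import BaseModel, Field

class MyConfig(BaseModel):
    name: str = Field(..., description="example field")
```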
1 year ago
Eugene Yurtsev 4c2de2a7f2
Adding missing types in some pydantic models (#9355)
* Adding missing types in some pydantic models -- this change is
required for making the code work with pydantic v2.
1 year ago
Harrison Chase 1c089cadd7
fix import v2 (#9346) 1 year ago
qqjettkgjzhxmwj 84a97d55e1
Fix typo in llm_router.py (#9322)
Fix typo
1 year ago
Joe Reuter 09aa1eac03
Airbyte loaders: Fix last_state getter (#9314)
This PR fixes the Airbyte loaders when doing incremental syncs. The
notebooks are calling out to access `loader.last_state` to get the
current state of incremental syncs, but this didn't work due to a
refactoring of how the loaders are structured internally in the original
PR.

This PR fixes the issue by adding a `last_state` property that forwards
the state correctly from the CDK adapter.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Jakub Kuciński 8bebc9206f
Add improved sources splitting in BaseQAWithSourcesChain (#8716)
## Type:
Improvement

---

## Description:
Running QAWithSourcesChain sometimes raises ValueError as mentioned in
issue #7184:
```
ValueError: too many values to unpack (expected 2)
Traceback:

    response = qa({"question": pregunta}, return_only_outputs=True)
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\base.py", line 166, in __call__
    raise e
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\base.py", line 160, in __call__
    self._call(inputs, run_manager=run_manager)
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\qa_with_sources\base.py", line 132, in _call
    answer, sources = re.split(r"SOURCES:\s", answer)
```
This is because the LLM sometimes generates a subsequent question, answer and
sources as a continuation, in a form similar to the one below:
```
<final_answer>
SOURCES: <sources>
QUESTION: <new_or_repeated_question>
FINAL ANSWER: <new_or_repeated_final_answer>
SOURCES: <new_or_repeated_sources>
```
This causes the following line
```
 re.split(r"SOURCES:\s", answer)
```
to return more than 2 elements, resulting in the ValueError. The simple fix
is to also split on "QUESTION:\s" and take the first two elements:
```
answer, sources = re.split(r"SOURCES:\s|QUESTION:\s", answer)[:2]
```

Sometimes LLM might also generate some other texts, like alternative
answers in a form:
```
<final_answer_1>
SOURCES: <sources>

<final_answer_2>
SOURCES: <sources>

<final_answer_3>
SOURCES: <sources>
```
In such cases it is best to also split the previously obtained sources on a
newline:
```
sources = re.split(r"\n", sources.lstrip())[0]
```
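A small self-contained demo of the two splits described above (the generation text is illustrative):

```python
import re

generation = (
    "Paris is the capital of France.\n"
    "SOURCES: doc-1\n"
    "QUESTION: What is the capital of France?\n"
    "FINAL ANSWER: Paris is the capital of France.\n"
    "SOURCES: doc-1"
)

# Split on SOURCES: or QUESTION: and keep only the first answer/sources pair.
answer, sources = re.split(r"SOURCES:\s|QUESTION:\s", generation)[:2]
# Keep only the first line of the sources in case more text follows.
sources = re.split(r"\n", sources.lstrip())[0]

print(answer.strip())  # Paris is the capital of France.
print(sources)         # doc-1
```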



---

## Issue:
Resolves #7184

---

## Maintainer:
@baskaryan
1 year ago
Bagatur a3c79b1909
Add tiktoken integration dep (#9332) 1 year ago
Bagatur ba5fbaba70
bump 266 (#9296) 1 year ago
axiangcoding 63601551b1
fix(llms): improve the ernie chat model (#9289)
- Description: improve the ernie chat model.
   - fix missing kwargs to payload
   - new test cases
   - add some debug level log
   - improve description
- Issue: None
- Dependencies: None
- Tag maintainer: @baskaryan
1 year ago
Daniel Chalef 1d55141c50
zep/new ZepVectorStore (#9159)
- new ZepVectorStore class
- ZepVectorStore unit tests
- ZepVectorStore demo notebook
- update zep-python to ~1.0.2

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
William FH 2519580994
Add Schema Evals (#9228)
Simple eval checks for whether a generation is valid json and whether it
matches an expected dict
1 year ago
Kenny 74a64cfbab
expose output key to create_openai_fn_chain (#9155)
A quick change to allow the output key of create_openai_fn_chain to
optionally be changed.

@baskaryan

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Bagatur afba2be3dc
update openai functions docs (#9278) 1 year ago
Bagatur 9abf60acb6
Bagatur/vectara regression (#9276)
Co-authored-by: Ofer Mendelevitch <ofer@vectara.com>
Co-authored-by: Ofer Mendelevitch <ofermend@gmail.com>
1 year ago
Xiaoyu Xee b30f449dae
Add dashvector vectorstore (#9163)
## Description
Add `Dashvector` vectorstore for langchain

- [dashvector quick
start](https://help.aliyun.com/document_detail/2510223.html)
- [dashvector package description](https://pypi.org/project/dashvector/)

## How to use
```python
from langchain.vectorstores.dashvector import DashVector

dashvector = DashVector.from_documents(docs, embeddings)
```

---------

Co-authored-by: smallrain.xuxy <smallrain.xuxy@alibaba-inc.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Bagatur bfbb97b74c
Bagatur/deeplake docs fixes (#9275)
Co-authored-by: adilkhan <adilkhan.sarsen@nu.edu.kz>
1 year ago
Kunj-2206 1b3942ba74
Added BittensorLLM (#9250)
Description: Adding NIBittensorLLM via Validator Endpoint to langchain
llms
Tag maintainer: @Kunj-2206

Maintainer responsibilities:
    Models / Prompts: @hwchase17, @baskaryan

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Toshish Jawale 852722ea45
Improvements in Nebula LLM (#9226)
- Description: Added improvements in Nebula LLM to perform auto-retry;
more generation parameters supported. Conversation is no longer required
to be passed in the LLM object. Examples are updated.
  - Issue: N/A
  - Dependencies: N/A
  - Tag maintainer: @baskaryan 
  - Twitter handle: symbldotai

---------

Co-authored-by: toshishjawale <toshish@symbl.ai>
1 year ago
Bagatur 358562769a
Bagatur/refac faiss (#9076)
Code cleanup and bug fix in deletion
1 year ago
Bagatur 3eccd72382
pin pydantic (#9274)
don't want default to be v2 yet
1 year ago
Erick Friis 76d09b4ed0
hub push/pull (#9225)
Description: Adds push/pull functions to interact with the hub
Issue: n/a
Dependencies: `langchainhub`

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Alex Gamble cf17c58b47
Update documentation for the Context integration with new URL and features (#9259)
Update documentation and URLs for the Langchain Context integration.

We've moved from getcontext.ai to context.ai \o/

Thanks in advance for the review!
1 year ago
Eugene Yurtsev a091b4bf4c
Update testing workflow to test with both pydantic versions (#9206)
* PR updates test.yml to test with both pydantic versions
* Code should be refactored to make it easier to do testing in matrix
format w/ packages
* Added steps to assert that pydantic version in the environment is as
expected
1 year ago
Bagatur e0162baa3b
add oai sched tests (#9257) 1 year ago
Joseph McElroy 5e9687a196
Elasticsearch self-query retriever (#9248)
Now that the ElasticsearchStore VectorStore is merged, I've added support for
the self-query retriever.

I've also added a notebook to demonstrate the capability, as well as
unit tests.

**Credit**
@elastic and @phoey1 on twitter.
1 year ago
Eugene Yurtsev 0470198fb5
Remove packages for pydantic compatibility (#9217)
# Poetry updates

This PR updates LangChain's poetry file to remove
any dependencies that aren't pydantic v2 compatible yet.

All packages remain usable under pydantic v1, and can be installed
separately. 

## Bumping the following packages:

* langsmith

## Removing the following packages

not used in extended unit-tests:

* zep-python, anthropic, jina, spacy, steamship, betabageldb

not used at all:

* octoai-sdk

Cleaning up extras for the removed packages.

## Snapshots updated

Some snapshots had to be updated due to a change in the data model in
langsmith. RunType used to be Union of Enum and string and was changed
to be string only.
1 year ago
Bagatur e986afa13a
bump 265 (#9253) 1 year ago
Hech 4b505060bd
fix: max_marginal_relevance_search and docs in Dingo (#9244) 1 year ago
axiangcoding 664ff28cba
feat(llms): support ernie chat (#9114)
Description: support ernie (文心一言) chat model
Related issue: #7990
Dependencies: None
Tag maintainer: @baskaryan
1 year ago
Bharat Ramanathan 08a8363fc6
feat(integration): Add support to serialize protobufs in WandbTracer (#8914)
This PR adds serialization support for protocol buffers in
`WandbTracer`. This allows code generation chains to be visualized.
Additionally, it also fixes a minor bug where the settings were not
honored when a run was initialized before using the `WandbTracer`.

@agola11

---------

Co-authored-by: Bharat Ramanathan <ramanathan.parameshwaran@gohuddl.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Joshua Sundance Bailey ef0664728e
ArcGISLoader update (#9240)
Small bug fixes and added metadata based on user feedback. This PR is
from the author of https://github.com/langchain-ai/langchain/pull/8873 .
1 year ago
Joseph McElroy eac4ddb4bb
Elasticsearch Store Improvements (#8636)
Todo:
- [x] Connection options (cloud, localhost url, es_connection) support
- [x] Logging support
- [x] Customisable field support
- [x] Distance Similarity support 
- [x] Metadata support
  - [x] Metadata Filter support 
- [x] Retrieval Strategies
  - [x] Approx
  - [x] Approx with Hybrid
  - [x] Exact
  - [x] Custom 
  - [x] ELSER (excluding hybrid as we are working on RRF support)
- [x] integration tests 
- [x] Documentation

👋 this is a contribution to improve the Elasticsearch integration with
Langchain. It's based loosely on the changes that are in master, but with
some notable changes:

## Package name & design improvements
The import name is now `ElasticsearchStore`, to aid discoverability of
the VectorStore.

```py
## Before
from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch, ElasticKnnSearch

## Now
from langchain.vectorstores.elasticsearch import ElasticsearchStore
```

## Retrieval Strategy support
Before we had a number of classes, depending on the strategy you wanted.
`ElasticKnnSearch` for approx, `ElasticVectorSearch` for exact / brute
force.

With `ElasticsearchStore` we have retrieval strategies:

### Approx Example
Default strategy for the vast majority of developers who use
Elasticsearch will be inferring the embeddings from outside of
Elasticsearch. Uses KNN functionality of _search.

```py
        texts = ["foo", "bar", "baz"]
       docsearch = ElasticsearchStore.from_texts(
            texts,
            FakeEmbeddings(),
            es_url="http://localhost:9200",
            index_name="sample-index"
        )
        output = docsearch.similarity_search("foo", k=1)
```

### Approx, with hybrid
Developers who want to search using both the embedding and a BM25 text
match. It's simple to enable.

```py
 texts = ["foo", "bar", "baz"]
       docsearch = ElasticsearchStore.from_texts(
            texts,
            FakeEmbeddings(),
            es_url="http://localhost:9200",
            index_name="sample-index",
            strategy=ElasticsearchStore.ApproxRetrievalStrategy(hybrid=True)
        )
        output = docsearch.similarity_search("foo", k=1)
```

### Approx, with `query_model_id`
Developers who want to infer within Elasticsearch, using the model
loaded in the ml node.

This relies on the developer to set up the pipeline and index if they
wish to embed the text in Elasticsearch. Example of this in the test.

```py
 texts = ["foo", "bar", "baz"]
       docsearch = ElasticsearchStore.from_texts(
            texts,
            FakeEmbeddings(),
            es_url="http://localhost:9200",
            index_name="sample-index",
            strategy=ElasticsearchStore.ApproxRetrievalStrategy(
                query_model_id="sentence-transformers__all-minilm-l6-v2"
            ),
        )
        output = docsearch.similarity_search("foo", k=1)
```

### I want to provide my own custom Elasticsearch Query
You might want to have more control over the query, to perform
multi-phase retrieval such as LTR, linearly boosting on document
parameters like recently updated or geo-distance. You can do this with
`custom_query_fn`

```py
        def my_custom_query(query_body: dict, query: str) -> dict:
            return {"query": {"match": {"text": {"query": "bar"}}}}

        texts = ["foo", "bar", "baz"]
        docsearch = ElasticsearchStore.from_texts(
            texts, FakeEmbeddings(), **elasticsearch_connection, index_name=index_name
        )
        docsearch.similarity_search("foo", k=1, custom_query=my_custom_query)

```

### Exact Example
Developers who have a small dataset in Elasticsearch and don't want the
cost of indexing the dims, trading that off for higher cost at query time.
Uses script_score.

```py
        texts = ["foo", "bar", "baz"]
       docsearch = ElasticsearchStore.from_texts(
            texts,
            FakeEmbeddings(),
            es_url="http://localhost:9200",
            index_name="sample-index",
            strategy=ElasticsearchStore.ExactRetrievalStrategy(),
        )
        output = docsearch.similarity_search("foo", k=1)
```

### ELSER Example
Elastic provides its own sparse vector model called ELSER. With these
changes, it's really easy to use. The vector store creates a pipeline and
an index that's set up for ELSER. All the developer needs to do is
configure, ingest and query via langchain tooling.

```py
texts = ["foo", "bar", "baz"]
       docsearch = ElasticsearchStore.from_texts(
            texts,
            FakeEmbeddings(),
            es_url="http://localhost:9200",
            index_name="sample-index",
            strategy=ElasticsearchStore.SparseVectorStrategy(),
        )
        output = docsearch.similarity_search("foo", k=1)

```

## Architecture
In the future, we can introduce new strategies without breaking backwards
compatibility as we evolve the index / query strategy.

## Credit
On release, could you credit @elastic and @phoey1 please? Thank you!

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Divyansh Garg 9529483c2a
Improve MultiOn client toolkit prompts (#9222)
- Updated prompts for the MultiOn toolkit for better functionality
- Non-blocking but good to have it merged to improve the overall
performance for the toolkit
 
@hinthornw @hwchase17

---------

Co-authored-by: Naman Garg <ngarg3@binghamton.edu>
1 year ago
William FH c478fc208e
Default On Retry (#9230)
Base callbacks don't have a default on retry event

Fix #8542

---------

Co-authored-by: landonsilla <landon.silla@stepstone.com>
1 year ago
Leonid Ganeline 93dd499997
docstrings: `document_loaders` consistency 3 (#9216)
Updated docstrings into the consistent format (probably the last update
for the `document_loaders`).
1 year ago
Kshitij Wadhwa a69cb95850
track langchain usage for Rockset (#9229)
Add the ability to track langchain usage for Rockset. Rockset's new python
client allows setting this. To prevent old clients from failing, the
exception is ignored if setting it fails (we can't track old versions)

Tested locally with old and new Rockset python client

cc @baskaryan
1 year ago
Leonid Ganeline 7810ea5812
docstrings: `chat_models` consistency (#9227)
Updated docstrings into the consistent format.
1 year ago
William FH b0896210c7
Return feedback with failed response if there's an error (#9223)
In Evals
1 year ago
William FH 7124f2ebfa
Parent Doc Retriever (#9214)
2 things:
- Implement the private method rather than the public one so callbacks
are handled properly
- Add search_kwargs (open to not adding this if we are trying to
deprecate this UX, but it seems like as a user I'd assume similar args to
the vector store retriever. In fact, some may assume this implements the
same interface, but I'm not dealing with that here)
1 year ago
Harrison Chase 3f601b5809
add async method in (#9204) 1 year ago
Clark 03ea0762a1
fix(jinachat): related to #9197 (#9200)
related to: https://github.com/langchain-ai/langchain/issues/9197

---------

Co-authored-by: qianjun.wqj <qianjun.wqj@alibaba-inc.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Eugene Yurtsev 4f1feaca83
Wrap OpenAPI features in conditionals for pydantic v2 compatibility (#9205)
Wrap OpenAPI in conditionals for pydantic v2 compatibility.
1 year ago
Glauco Custódio 89be10f6b4
add ttl to RedisCache (#9068)
Add `ttl` (time to live) to `RedisCache`
1 year ago
Eugene Yurtsev 04bc5f3b18
Conditionally add pydantic v1 to namespace (#9202)
Conditionally add pydantic_v1 to namespace.
1 year ago
shibuiwilliam feec422bf7
fix logging to logger (#9192)
# What
- fix logging to logger
1 year ago
Bagatur 5935767056
bump lc 246, lce 9 (#9207) 1 year ago
Bagatur b5a57acf6c
lite llm lint (#9208) 1 year ago
Krish Dholakia 49f1d8477c
Adding ChatLiteLLM model (#9020)
Description: Adding a langchain integration for the LiteLLM library 
Tag maintainer: @hwchase17, @baskaryan
Twitter handle: @krrish_dh / @Berri_AI

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Eugene Yurtsev 72f9150a50
Update 2 more pydantic imports (#9203)
Update two more pydantic imports to use v1 explicitly
1 year ago
Eugene Yurtsev c172f972ea
Create pydantic v1 namespace, add partial compatibility for pydantic v2 (#9123)
First of a few PRs to add full compatibility to both pydantic v1 and v2.

This PR creates pydantic v1 namespace and adds it to sys.modules.

Upcoming changes: 
1. Handle `openapi-schema-pydantic = "^1.2"` and dependent chains/tools
2. bump dependencies to versions that are cross compatible for pydantic
or remove them (see below)
3. Add tests to github workflows to test with pydantic v1 and v2

**Dependencies**

From a quick look (could be wrong since was done manually)

**dependencies pinning pydantic below 2** (some of these can be bumped
to newer versions that provide cross-compatible code)
anthropic
bentoml
confection
fastapi
langsmith
octoai-sdk
openapi-schema-pydantic
qdrant-client
spacy
steamship
thinc
zep-python

Unpinned

marqo (*)
nomic (*)
xinference(*)
1 year ago
Evan Schultz 8189dea0d8
Fixes typing issues in BaseOpenAI (#9183)
## Description: 

Sets default values for `client` and `model` attributes in the
BaseOpenAI class to fix Pylance Typing issue.

  - Issue: #9182.
  - Twitter handle: @evanmschultz
1 year ago
Massimiliano Pronesti d95eeaedbe
feat(llms): support vLLM's OpenAI-compatible server (#9179)
This PR aims at supporting [vLLM's OpenAI-compatible server
feature](https://vllm.readthedocs.io/en/latest/getting_started/quickstart.html#openai-compatible-server),
i.e. allowing to call vLLM's LLMs like if they were OpenAI's.

I've also updated the related notebook providing an example usage. At
the moment, vLLM only supports the `Completion` API.
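A minimal sketch, assuming the `VLLMOpenAI` wrapper and a vLLM OpenAI-compatible server running locally on the default port (the model name is a placeholder):

```python
from langchain.llms import VLLMOpenAI

llm = VLLMOpenAI(
    openai_api_key="EMPTY",  # the local server does not check the key
    openai_api_base="http://localhost:8000/v1",
    model_name="tiiuae/falcon-7b",
)
print(llm("Rome is"))
```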
1 year ago
Michael Goin 621da3c164
Adds DeepSparse as an LLM (#9184)
Adds [DeepSparse](https://github.com/neuralmagic/deepsparse) as an LLM
backend. DeepSparse supports running various open-source sparsified
models hosted on [SparseZoo](https://sparsezoo.neuralmagic.com/) for
performance gains on CPUs.

Twitter handles: @mgoin_ @neuralmagic


---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Bagatur 0fa69d8988
Bagatur/zep python 1.0 (#9186)
Co-authored-by: Daniel Chalef <131175+danielchalef@users.noreply.github.com>
1 year ago
Eugene Yurtsev 9b24f0b067
Enhance deprecation decorator to modify docs with sphinx directives (#9069)
Enhance deprecation decorator
1 year ago
Bagatur cdfe2c96c5
bump 263 (#9156) 1 year ago
Leonid Ganeline 19f504790e
docstrings: document_loaders consitency 2 (#9148)
This is Part 2. See #9139 (Part 1).
1 year ago
Harrison Chase 1b58460fe3
update keys for chain (#5164)
Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
胡亮 7edf4ca396
Support multi gpu inference for HuggingFaceEmbeddings (#4732)
Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
UmerHA 8aab39e3ce
Added SmartGPT workflow (issue #4463) (#4816)
# Added SmartGPT workflow by providing SmartLLM wrapper around LLMs
Edit:
As @hwchase17 suggested, this should be a chain, not an LLM. I have
adapted the PR.

It is used like this:
```
from langchain.prompts import PromptTemplate
from langchain.chains import SmartLLMChain
from langchain.chat_models import ChatOpenAI

hard_question = "I have a 12 liter jug and a 6 liter jug. I want to measure 6 liters. How do I do it?"

llm = ChatOpenAI(model_name="gpt-4")
prompt = PromptTemplate.from_template(hard_question)
chain = SmartLLMChain(llm=llm, prompt=prompt, verbose=True)

chain.run({})
```


Original text: 
Added SmartLLM wrapper around LLMs to allow for SmartGPT workflow (as in
https://youtu.be/wVzuvf9D9BU). SmartLLM can be used wherever LLM can be
used. E.g:

```
smart_llm = SmartLLM(llm=OpenAI())
smart_llm("What would be a good company name for a company that makes colorful socks?")
```
or
```
smart_llm = SmartLLM(llm=OpenAI())
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=smart_llm, prompt=prompt)
chain.run("colorful socks")
```

SmartGPT consists of 3 steps:

1. Ideate - generate n possible solutions ("ideas") to user prompt
2. Critique - find flaws in every idea & select best one
3. Resolve - improve upon best idea & return it

Fixes #4463

## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:

- @hwchase17
- @agola11

Twitter: [@UmerHAdil](https://twitter.com/@UmerHAdil) | Discord:
RicChilligerDude#7589

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Lucas Pickup 1d3735a84c
Ensure deployment_id is set to provided deployment, required for Azure OpenAI. (#5002)
# Ensure deployment_id is set to provided deployment, required for Azure
OpenAI.
---------

Co-authored-by: Lucas Pickup <lupickup@microsoft.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Bagatur 45741bcc1b
Bagatur/vectara nit (#9140)
Co-authored-by: Ofer Mendelevitch <ofer@vectara.com>
1 year ago
Dominick DEV 9b64932e55
Add LangChain utility for real-time crypto exchange prices (#4501)
This commit adds the LangChain utility which allows for the real-time
retrieval of cryptocurrency exchange prices. With LangChain, users can
easily access up-to-date pricing information by running the command
".run(from_currency, to_currency)". This new feature provides a
convenient way to stay informed on the latest exchange rates and make
informed decisions when trading crypto.


---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Joshua Sundance Bailey eaa505fb09
Create ArcGISLoader & example notebook (#8873)
- Description: Adds the ArcGISLoader class to
`langchain.document_loaders`
  - Allows users to load data from ArcGIS Online, Portal, and similar
- Users can authenticate with `arcgis.gis.GIS` or retrieve public data
anonymously
  - Uses the `arcgis.features.FeatureLayer` class to retrieve the data
  - Defines the most relevant keyword arguments and accepts `**kwargs`
- Dependencies: Using this class requires `arcgis` and, optionally,
`bs4.BeautifulSoup`.
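A minimal sketch based on the description above, assuming the loader accepts a feature layer URL and that public data is fetched anonymously (the URL is a placeholder):

```python
from langchain.document_loaders import ArcGISLoader

loader = ArcGISLoader(
    "https://services.arcgis.com/<org-id>/arcgis/rest/services/MyLayer/FeatureServer/0"
)
docs = loader.load()
print(docs[0].metadata)
```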

Tagging maintainers:
  - DataLoaders / VectorStores / Retrievers: @rlancemartin, @eyurtsev

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Bagatur e21152358a
fix (#9145) 1 year ago
Leonid Ganeline edb585228d
docstrings: document_loaders consitency (#9139)
Formatted docstrings from different formats to a consistent format, like:
>Loads processed docs from Docugami.
"Load from `Docugami`."

>Loader that uses Unstructured to load HTML files.
"Load `HTML` files using `Unstructured`."

>Load documents from a directory.
"Load from a directory."
 
- `Load` - no `Loads`
- DocumentLoader always loads Documents, so no more
"documents/docs/texts/ etc"
- integrated systems and APIs enclosed in backticks,
1 year ago
Markus Schiffer 00bf472265
Fix for SVM retriever discarding document metadata (#9141)
As stated in the title the SVM retriever discarded the metadata of
passed in docs. This code fixes that. I also added one unit test that
should test that.
---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Bagatur bace17e0aa
rm integration deps (#9142) 1 year ago
Eugene Yurtsev 44bc89b7bf
Support a few list like operations on ChatPromptTemplate (#9077)
Make it easier to work with chat prompt template
1 year ago
Hai The Dude e4418d1b7e
Added new use case docs for Web Scraping, Chromium loader, BS4 transformer (#8732)
- Description: Added a new use case category called "Web Scraping", and
a tutorial to scrape websites using OpenAI Functions Extraction chain to
the docs.
  - Tag maintainer:@baskaryan @hwchase17 ,
- Twitter handle: https://www.linkedin.com/in/haiphunghiem/ (I'm on
LinkedIn mostly)

---------

Co-authored-by: Lance Martin <lance@langchain.dev>
1 year ago
sseide 6cb763507c
add basic support for redis cluster server (#9128)
This change updates the central utility class to recognize a Redis
cluster server after connection and returns a new cluster-aware Redis
client. The "normal" Redis client would not be able to talk to a cluster
node because keys might be stored on other shards of the Redis cluster
and would therefore not be readable or writable.

With this patch clients do not need to know what kind of Redis server it is;
they just connect through the same API calls for standalone and cluster
servers.

There are no dependencies added due to this MR.

Remark - with current redis-py client library (4.6.0) a cluster cannot
be used as VectorStore. It can be used for other use-cases. There is a
bug / missing feature(?) in the Redis client breaking the VectorStore
implementation. I opened an issue at the client library too
(redis/redis-py#2888) to fix this. As soon as this is fixed in
`redis-py` library it should be usable there too.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
David Duong 6d03f8b5d8
Add serialisable support for Replicate (#8525) 1 year ago
niklub 16af5f8690
Add LabelStudio integration (#8880)
This PR introduces [Label Studio](https://labelstud.io/) integration
with LangChain via `LabelStudioCallbackHandler`:

- sending data to the Label Studio instance
- labeling dataset for supervised LLM finetuning
- rating model responses
- tracking and displaying chat history
- support for custom data labeling workflow

### Example

```
chat_llm = ChatOpenAI(callbacks=[LabelStudioCallbackHandler(mode="chat")])
chat_llm([
    SystemMessage(content="Always use emojis in your responses."),
        HumanMessage(content="Hey AI, how's your day going?"),
    AIMessage(content="🤖 I don't have feelings, but I'm running smoothly! How can I help you today?"),
        HumanMessage(content="I'm feeling a bit down. Any advice?"),
    AIMessage(content="🤗 I'm sorry to hear that. Remember, it's okay to seek help or talk to someone if you need to. 💬"),
        HumanMessage(content="Can you tell me a joke to lighten the mood?"),
    AIMessage(content="Of course! 🎭 Why did the scarecrow win an award? Because he was outstanding in his field! 🌾"),
        HumanMessage(content="Haha, that was a good one! Thanks for cheering me up."),
    AIMessage(content="Always here to help! 😊 If you need anything else, just let me know."),
        HumanMessage(content="Will do! By the way, can you recommend a good movie?"),
])
```

<img width="906" alt="image"
src="https://github.com/langchain-ai/langchain/assets/6087484/0a1cf559-0bd3-4250-ad96-6e71dbb1d2f3">


### Dependencies
- [label-studio](https://pypi.org/project/label-studio/)
- [label-studio-sdk](https://pypi.org/project/label-studio-sdk/)

https://twitter.com/labelstudiohq

---------

Co-authored-by: nik <nik@heartex.net>
1 year ago
Bagatur 8cb2594562
Bagatur/dingo (#9079)
Co-authored-by: gary <1625721671@qq.com>
1 year ago
Jacques Arnoux 926c64da60
Fix web research retriever for unknown links in results (#9115)
Fixes an issue with the web research retriever for unknown links in results.
This was sometimes making the retriever crash.

@rlancemartin
1 year ago
Alvaro Bartolome f7ae183f40
`ArgillaCallbackHandler` to properly use default values for `api_url` and `api_key` (#9113)
As of the recent PR at #9043, after some testing we've realised that the
default values were not being used for `api_key` and `api_url`. Besides
that, the default for `api_key` was set to `argilla.apikey`, but since
the default values are intended for people using the Argilla Quickstart
(easy to run and setup), the defaults should be instead `owner.apikey`
if using Argilla 1.11.0 or higher, or `admin.apikey` if using a lower
version of Argilla.

Additionally, we've removed the f-string replacements from the
docstrings.

---------

Co-authored-by: Gabriel Martin <gabriel@argilla.io>
1 year ago
Bagatur 01ef786e7e
bump 262 (#9108) 1 year ago
Bagatur 3b754b5461
Bagatur/filter metadata (#9015)
Co-authored-by: Matt Robinson <mrobinson@unstructuredai.io>
1 year ago
Kim Minjong 7f0e847c13
Update pydantic format instruction prompt (#9095)
- remove unopened bracket
1 year ago
Ashutosh Sanzgiri 991b448dfc
minor edits (#9093)
Description:

Minor edit to PR#845

Thanks!
1 year ago
Bagatur 3ab4e21579
fix json tool (#9096) 1 year ago
Sam Groenjes 2184e3a400
Fix IndexError when input_list is Empty in prep_prompts (#5769)
This MR corrects the IndexError arising in prep_prompts method when no
documents are returned from a similarity search.

Fixes #1733 
Co-authored-by: Sam Groenjes <sam.groenjes@darkwolfsolutions.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Chenyu Zhao c0acbdca1b
Update Fireworks model names (#9085) 1 year ago
Bagatur b80e3825a6
Bagatur/pinecone by vector (#9087)
Co-authored-by: joseph <joe@outverse.com>
1 year ago
Nikhil Kumar 6abb2c2c08
Buffer method of ConversationTokenBufferMemory should be able to return messages as string (#7057)
### Description:
`ConversationTokenBufferMemory` should have a simple way of returning
the conversation messages as a string.

Previously to complete this, you would only have the option to return
memory as an array through the buffer method and call
`get_buffer_string` by importing it from `langchain.schema`, or use the
`load_memory_variables` method and key into `self.memory_key`.
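A minimal sketch of the previous workaround mentioned above, converting a message list to a string with `get_buffer_string`:

```python
from langchain.memory import ChatMessageHistory
from langchain.schema import get_buffer_string

history = ChatMessageHistory()
history.add_user_message("Hi there!")
history.add_ai_message("Hello! How can I help?")

# Previously you had to stringify the message list yourself.
print(get_buffer_string(history.messages))
```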

### Maintainer
@hwchase17

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
William FH 57dd4daa9a
Add string example mapper (#9086)
Now that we accept any runnable or arbitrary function to evaluate, we
don't always look up the input keys. If an evaluator requires
references, we should try to infer if there's one key present. We only
have delayed validation here but it's better than nothing
1 year ago
Bidhan Roy 02430e25b6
BagelDB (bageldb.ai), VectorStore integration. (#8971)
- **Description**: [BagelDB](bageldb.ai) is a collaborative vector
database. Integrated the bageldb PyPi package with langchain, with
related tests and code.

  - **Issue**: Not applicable.
  - **Dependencies**: `betabageldb` PyPi package.
  - **Tag maintainer**: @rlancemartin, @eyurtsev, @baskaryan
  - **Twitter handle**: bageldb_ai (https://twitter.com/BagelDB_ai)
  
We ran `make format`, `make lint` and `make test` locally.

Followed the contribution guideline thoroughly
https://github.com/hwchase17/langchain/blob/master/.github/CONTRIBUTING.md

---------

Co-authored-by: Towhid1 <nurulaktertowhid@gmail.com>
1 year ago
DJ Atha ee52482db8
Fix issue 7445 (#7635)
Description: updated BabyAGI examples and experimental to append the
iteration to the result id, fixing an error when storing data to the vectorstore.
Issue: 7445
Dependencies: no
Tag maintainer: @eyurtsev
This fix worked for me locally. Happy to take some feedback and iterate
on a better solution. I was considering appending a uuid instead but
didn't want to over complicate the example.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Harrison Chase bb6fbf4c71
openai adapters (#8988)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Nuno Campos <nuno@boringbits.io>
1 year ago
Harrison Chase 45f0f9460a
add async for python repl (#9080) 1 year ago
Neil Murphy 105c787e5a
Add convenience methods to ConversationBufferMemory and ConversationB… (#8981)
Add convenience methods to `ConversationBufferMemory` and
`ConversationBufferWindowMemory` to get buffer either as messages or as
string.

Helps when `return_messages` is set to `True` but you want access to the
messages as a string, and vice versa.

@hwchase17

One use case: Using a `MultiPromptRouter` where `default_chain` is
`ConversationChain`, but destination chains are `LLMChains`. Injecting
chat memory into prompts for destination chains prints a stringified
`List[Messages]` in the prompt, which creates a lot of noise. These
convenience methods allow caller to choose either as needed.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Zend 6221eb5974
Recursive url loader w/ test (#8813)
Description: Due to some issue on the test, this is a separate PR with
the test for #8502

Tag maintainer: @rlancemartin

---------

Co-authored-by: Lance Martin <lance@langchain.dev>
Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Junlin Zhou cb5fb751e9
Enhance regex of structured_chat agents' output parser (#8965)
The current regex only extracts the agent's action between '` ``` ``` `' fences;
this commit will extract the action between both '` ```json ``` `' and
'` ``` ``` `' fences.
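A minimal sketch of the idea (not the exact pattern from the PR); the fence string is built programmatically so this snippet stays readable:

```python
import re

# Capture the fenced action block whether the opening fence is marked "json"
# or left plain.
fence = "`" * 3  # triple backtick
pattern = re.compile(fence + r"(?:json)?\s*(.*?)" + fence, re.DOTALL)

text = f'Action:\n{fence}json\n{{"action": "Final Answer", "action_input": "hi"}}\n{fence}'
match = pattern.search(text)
if match:
    print(match.group(1).strip())
```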

This is very similar to #7511 
Co-authored-by: zjl <junlinzhou@yzbigdata.com>
1 year ago
Bagatur 16bd328aab
Use Embeddings in pinecone (#8982)
cc @eyurtsev @olivier-lacroix @jamescalam 

redo of #2741
1 year ago
Piyush Jain 8eea46ed0e
Bedrock embeddings async methods (#9024)
## Description
This PR adds the `aembed_query` and `aembed_documents` async methods for
improving the embeddings generation for large documents. The
implementation uses asyncio tasks and gather to achieve concurrency as
there is no bedrock async API in boto3.
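A minimal sketch of the concurrency approach described above; `embed_query_sync` is a hypothetical stand-in for the blocking boto3 call:

```python
import asyncio
from typing import List

def embed_query_sync(text: str) -> List[float]:
    # Placeholder for the synchronous Bedrock embedding call.
    return [float(len(text))]

async def aembed_documents(texts: List[str]) -> List[List[float]]:
    loop = asyncio.get_running_loop()
    # Run the blocking calls in the default executor and gather the results.
    tasks = [loop.run_in_executor(None, embed_query_sync, t) for t in texts]
    return await asyncio.gather(*tasks)

print(asyncio.run(aembed_documents(["foo", "bar"])))
```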

### Maintainers
@agola11 
@aarora79  

### Open questions
To avoid throttling from the Bedrock API, should there be an option to
limit the concurrency of the calls?
1 year ago
Eugene Yurtsev 67ca187560
Fix incorrect code blocks in documentation (#9060)
Fixes incorrect code block syntax in doc strings.
1 year ago
Eugene Yurtsev 46f3428cb3
Fix more incorrect code blocks in doc strings (#9073)
Fix 2 more incorrect code blocks in strings
1 year ago
Eugene Yurtsev a5a4c53280
RedisStore: Update init and Documentation updates (#9044)
* Update Redis Store to support init from parameters
* Update notebook to show how to use redis store, and some fixes in
documentation
1 year ago
Leonid Ganeline fcbbddedae
ArxivLoader fix for issue 9046 (#9061)
Fixed #9046 
Added unit tests for this fix.
 @eyurtsev
1 year ago
Mike Lambert e94a5d753f
Move from test to supported claude-instant-1 model (#9066)
Moves from "test" model to "claude-instant-1" model which is supported
and has actual capacity
1 year ago
Eugene Yurtsev b7bc8ec87f
Add excludes to FileSystemBlobLoader (#9064)
Add option to specify exclude patterns.

https://github.com/langchain-ai/langchain/discussions/9059
1 year ago
Eugene Yurtsev 6c70f491ba
ChatPromptTemplate pending deprecation proposal (#9004)
Pending deprecations for ChatPromptTemplate proposals
1 year ago
TRY-ER 2431eca700
Agent vector store tool doc (#9029)
I was initially confused whether to use create_vectorstore_agent or
create_vectorstore_router_agent due to the lack of documentation, so I
created simple documentation for each function about their
different use cases.
- Description: Added docstrings in create_vectorstore_agent and
create_vectorstore_router_agent to point out the difference in their
use cases
  - Tag maintainer: @rlancemartin, @eyurtsev

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Alvaro Bartolome 08a0741d82
Update `ArgillaCallbackHandler` as of latest `argilla` release (#9043)
Hi @agola11, or whoever is reviewing this PR 😄 

## What's in this PR?

As of the latest Argilla release, we'll change and refactor some things
to make some workflows easier. One of those is how everything's pushed
to Argilla: now there's no need to call `push_to_argilla` on a
`FeedbackDataset` when either `push_to_argilla` is called for the first
time or `from_argilla` is called; among others.

We also add some class variables to make sure those are easy to update
in case we update those internally in the future, also to make the
`warnings.warn` message lighter from the code view.

P.S. Regarding the Twitter/X mention feel free to do so at either
https://twitter.com/argilla_io or https://twitter.com/alvarobartt, or
both if applicable, otherwise, just the first Twitter/X handle.
1 year ago
Blake (Yung Cher Ho) 8d351bfc20
Takeoff integration (#9045)
## Description:
This PR adds the Titan Takeoff Server to the available LLMs in
LangChain.

Titan Takeoff is an inference server created by
[TitanML](https://www.titanml.co/) that allows you to deploy large
language models locally on your hardware in a single command. Most
generative model architectures are included, such as Falcon, Llama 2,
GPT2, T5 and many more.

Read more about Titan Takeoff here:
-
[Blog](https://medium.com/@TitanML/introducing-titan-takeoff-6c30e55a8e1e)
- [Docs](https://docs.titanml.co/docs/titan-takeoff/getting-started)

#### Testing
As Titan Takeoff runs locally on port 8000 by default, no network access
is needed. Responses are mocked for testing.

- [x] Make Lint
- [x] Make Format
- [x] Make Test

#### Dependencies
No new dependencies are introduced. However, users will need to install
the titan-iris package in their local environment and start the Titan
Takeoff inferencing server in order to use the Titan Takeoff
integration.

Thanks for your help and please let me know if you have any questions.

cc: @hwchase17 @baskaryan
1 year ago
Nuno Campos 3bdc273ab3
Implement .transform() in RunnablePassthrough() (#9032)
- This ensures passthrough doesn't break streaming
---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Bagatur 206f809366
fix sched ci (more) (#9056) 1 year ago
Ismail Pelaseyed abb1264edf
Fix issue with Metaphor Search Tool throwing error on missing keys in API response (#9051)
- Description: Fixes an issue with the Metaphor Search Tool throwing an
error when keys are missing in the API response.
  - Issue: #9048 
  - Tag maintainer: @hinthornw @hwchase17 
  - Twitter handle: @pelaseyed
1 year ago
Eugene Yurtsev 5e05ba2140
Add embeddings cache (#8976)
This PR adds the ability to temporarily cache or persistently store
embeddings. 

A notebook has been included showing how to set up the cache and how to
use it with a vectorstore.
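A minimal sketch of wiring up the cache, assuming the `CacheBackedEmbeddings` and `LocalFileStore` names introduced around this change and an OpenAI API key in the environment:

```python
from langchain.embeddings import CacheBackedEmbeddings, OpenAIEmbeddings
from langchain.storage import LocalFileStore

store = LocalFileStore("./embedding-cache/")
underlying = OpenAIEmbeddings()

cached_embedder = CacheBackedEmbeddings.from_bytes_store(
    underlying, store, namespace=underlying.model
)
# A second call for the same texts is served from the cache.
vectors = cached_embedder.embed_documents(["hello", "world"])
```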
1 year ago
Bagatur 6e14f9548b
bump 261 (#9041) 1 year ago
Eugene Yurtsev d21333d710
Add redis storage (#8980)
Add a redis implementation of a BaseStore
1 year ago
Bagatur 434a96415b
make runnable dir (#9016)
Co-authored-by: Nuno Campos <nuno@boringbits.io>
1 year ago
Nuno Campos c7a489ae0d
Small improvements for tracer and debug output of runnables (#8683)
1 year ago
Bagatur 15a5002746 Merge branch 'master' into bagatur/locals_in_config 1 year ago
Bagatur f8ed93e7bd Merge branch 'master' into bagatur/locals_in_config 1 year ago
EricFan 618cf5241e
Open file in UTF-8 encoding (#6919) (#8943)
FileCallbackHandler cannot handle some languages, for example Chinese.
Opening the file with UTF-8 encoding fixes it.
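A minimal sketch of the fix (the file name is illustrative):

```python
# Open the log file with an explicit UTF-8 encoding so non-ASCII text
# (e.g. Chinese) is written correctly.
with open("output.log", "a", encoding="utf-8") as f:
    f.write("你好，世界\n")
```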
@agola11
  
**Issue**: #6919 
**Dependencies**: NO dependencies,

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
colegottdank f4a47ec717
Add optional model kwargs to ChatAnthropic to allow overrides (#9013)
---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Kaizen bbbd2b076f
DirectoryLoader slicing (#8994)
DirectoryLoader can now return a random sample of files in a directory.
Parameters added are:
- sample_size
- randomize_sample
- sample_seed
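A minimal sketch using the new sampling parameters (the path and glob are placeholders):

```python
from langchain.document_loaders import DirectoryLoader

loader = DirectoryLoader(
    "docs/",
    glob="**/*.md",
    sample_size=10,         # load at most 10 files
    randomize_sample=True,  # pick them at random
    sample_seed=42,         # make the random sample reproducible
)
docs = loader.load()
```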


@rlancemartin, @eyurtsev

---------

Co-authored-by: Andrew Oseen <amovfx@protonmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
IanRogers-101Ways d248481f13
skip over empty google spreadsheets (#8974)
- Description: Allow GoogleDriveLoader to handle empty spreadsheets  
- Issue: Currently GoogleDriveLoader will crash if it tries to load a
spreadsheet with an empty sheet
  - Dependencies: n/a
  - Tag maintainer: @rlancemartin, @eyurtsev
1 year ago
Eugene Yurtsev efa02ed768
Suppress divide by zero wranings for cosine similarity (#9006)
Suppress runtime warnings for divide-by-zero, as the downstream code
handles the scenario (handling inf and nan)
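A minimal sketch of the approach, assuming a NumPy-based cosine similarity:

```python
import numpy as np

def cosine_similarity(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    # Suppress divide-by-zero / invalid warnings; downstream code handles
    # the resulting inf and nan values.
    with np.errstate(divide="ignore", invalid="ignore"):
        X_norm = X / np.linalg.norm(X, axis=1, keepdims=True)
        Y_norm = Y / np.linalg.norm(Y, axis=1, keepdims=True)
        return X_norm @ Y_norm.T
```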
1 year ago
Leonid Ganeline 5454591b0a
docstrings cleanup (#8993)
Added/Updated docstrings

 @baskaryan
1 year ago
Massimiliano Pronesti c72da53c10
Add logprobs to SamplingParameters in vllm (#9010)
This PR aims at amending #8806 , that I opened a few days ago, adding
the extra `logprobs` parameter that I accidentally forgot
1 year ago
Bagatur 8dd071ad08
import airbyte loaders (#9009) 1 year ago
Bagatur 05cdd22c39 merge 1 year ago
Bagatur eb0134fbb3 rfc 1 year ago
Bagatur 96d064e305
bump 260 (#9002) 1 year ago
Bagatur 50b13ab938 wip 1 year ago
Nuno Campos 808248049d
Implement a router for openai functions (#8589) 1 year ago
Eugene Yurtsev a6e6e9bb86
Fix airbyte loader (#8998)
Fix airbyte loader

https://github.com/langchain-ai/langchain/issues/8996
1 year ago
William FH 90579021f8
Update Key Check (#8948)
In eval loop. It needn't be done unless you are creating the
corresponding evaluators
1 year ago
Jerzy Czopek 539672a7fd
Feature/fix azureopenai model mappings (#8621)
This pull request aims to ensure that the `OpenAICallbackHandler` can
properly calculate the total cost for Azure OpenAI chat models. The
following changes have resolved this issue:

- The `model_name` has been added to the ChatResult llm_output. Without
this, the default values of `gpt-35-turbo` were applied. This was
causing the total cost for Azure OpenAI's GPT-4 to be significantly
inaccurate.
- A new parameter `model_version` has been added to `AzureChatOpenAI`.
Azure does not include the model version in the response. With the
addition of `model_name`, this is not a significant issue for GPT-4
models, but it's an issue for GPT-3.5-Turbo. Version 0301 (default) of
GPT-3.5-Turbo on Azure has a flat rate of 0.002 per 1k tokens for both
prompt and completion. However, version 0613 introduced a split in
pricing for prompt and completion tokens.
- The `OpenAICallbackHandler` implementation has been updated with the
proper model names, versions, and cost per 1k tokens.

Unit tests have been added to ensure the functionality works as
expected; the Azure ChatOpenAI notebook has been updated with examples.
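A minimal sketch of how the new parameter combines with the cost callback, assuming Azure credentials and API version are configured via environment variables (the deployment name is a placeholder):

```python
from langchain.chat_models import AzureChatOpenAI
from langchain.callbacks import get_openai_callback

llm = AzureChatOpenAI(deployment_name="gpt-35-turbo", model_version="0613")
with get_openai_callback() as cb:
    llm.predict("Hello!")
    print(cb.total_cost)  # uses the 0613 prompt/completion pricing
```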

Maintainers: @hwchase17, @baskaryan

Twitter handle: @jjczopek

---------

Co-authored-by: Jerzy Czopek <jerzy.czopek@avanade.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
shibuiwilliam 3adb1e12ca
make trajectory eval chain stricter and add unit tests (#8909)
- update trajectory eval logic to be stricter
- add tests to trajectory eval chain
1 year ago
Nuno Campos b8df15cd64
Adds transform support for runnables (#8762)

---------

Co-authored-by: jacoblee93 <jacoblee93@gmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Harrison Chase 4d72288487
async output parser (#8894)
Co-authored-by: Nuno Campos <nuno@boringbits.io>
1 year ago
Bagatur 3c6eccd701
bump 259 (#8951) 1 year ago
Youngwook Kim 429de77b3b refactor(langchain): improve type annotations in url_playwright and its test 1 year ago
Harrison Chase 7de6a1b78e
parent document retriever (#8941) 1 year ago
Youngwook Kim 04fcd2d2e0 refactor(document_loaders): introduce PlaywrightEvaluator abstract base class for custom evaluators and add tests 1 year ago
Taqi Jaffri 5919c0f4a2 notebook cleanup 1 year ago
Taqi Jaffri bcdf3be530 Merge branch 'master' into tjaffri/docugami_loader_source 1 year ago
Youngwook Kim ef7f4aea32 refactor: modify method visibility in url_playwright 1 year ago
Youngwook Kim 224263aa24 refactor(document_loaders): modify evaluation methods in PlaywrightURLLoader 1 year ago
Youngwook Kim dc4b037957 docs(url_playwright): update docstrings for sync_evaluate_page and async_evaluate_page methods 1 year ago
Youngwook Kim 1fa5d94591 feat(document_loaders): add sync and async page evaluation methods to PlaywrightURLLoader 1 year ago
Aarav Borthakur 3f64b8a761
Integrate Rockset as a chat history store (#8940)
Description: Adds Rockset as a chat history store
Dependencies: no changes
Tag maintainer: @hwchase17

This PR passes linting and testing. 

I added a test for the integration and an example notebook showing its
use.
1 year ago
William FH e3056340da
Add id in error in tracer (#8944) 1 year ago
Bagatur 95cf7de112
scheduled tests GHA (#8879)
Adding scheduled daily GHA that runs marked integration tests. To start
just marking some tests in test_openai
1 year ago
Joe Reuter 8f0cd91d57
Airbyte based loaders (#8586)
This PR adds 8 new loaders:
* `AirbyteCDKLoader` This reader can wrap and run all python-based
Airbyte source connectors.
* Separate loaders for the most commonly used APIs:
  * `AirbyteGongLoader`
  * `AirbyteHubspotLoader`
  * `AirbyteSalesforceLoader`
  * `AirbyteShopifyLoader`
  * `AirbyteStripeLoader`
  * `AirbyteTypeformLoader`
  * `AirbyteZendeskSupportLoader`

## Documentation and getting started
I added the basic shape of the config to the notebooks. This increases
the maintenance effort a bit, but I think it's worth it to make sure
people can get started quickly with these important connectors. This is
also why I linked the spec and the documentation page in the readme as
these two contain all the information to configure a source correctly
(e.g. it won't suggest using oauth if that's avoidable even if the
connector supports it).

## Document generation
The "documents" produced by these loaders won't have a text part
(instead, all the record fields are put into the metadata). If a text is
required by the use case, the caller needs to do custom transformation
suitable for their use case.

## Incremental sync
All loaders support incremental syncs if the underlying streams support
it. By storing the `last_state` from the reader instance away and
passing it in when loading, it will only load updated records.
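A minimal sketch of one of these loaders with an incremental follow-up sync; the config keys follow the Stripe connector spec and, like the `state` keyword, are assumptions here with placeholder values:

```python
from langchain.document_loaders.airbyte import AirbyteStripeLoader

config = {
    "client_secret": "<stripe-api-key>",
    "account_id": "<account-id>",
    "start_date": "2023-01-01T00:00:00Z",
}

loader = AirbyteStripeLoader(config=config, stream_name="invoices")
docs = loader.load()

# Incremental sync: persist last_state and pass it back in on the next run
# so only updated records are loaded.
state = loader.last_state
incremental_loader = AirbyteStripeLoader(
    config=config, stream_name="invoices", state=state
)
new_docs = incremental_loader.load()
```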

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Eugene Yurtsev 15f650ae8c
Add base storage interface, 2 implementations and utility encoder (#8895)
This PR defines an abstract interface for key value stores.

It provides 2 implementations: 
1. Local File System
2. In memory -- used to facilitate testing

It also provides an encoder utility to help take care of serialization
from arbitrary data to data that can be stored by the given store
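A minimal sketch of the key-value interface using the in-memory implementation mentioned above (class and method names as I understand them):

```python
from langchain.storage import InMemoryStore

store = InMemoryStore()
store.mset([("key1", b"value1"), ("key2", b"value2")])
print(store.mget(["key1", "key2"]))  # [b'value1', b'value2']
store.mdelete(["key1"])
print(list(store.yield_keys()))      # ['key2']
```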
1 year ago
Harrison Chase 7543a3d70e
Harrison/image (#845)
Co-authored-by: Ashutosh Sanzgiri <sanzgiri@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Bagatur ab193338aa
bump 258 (#8932) 1 year ago
Eugene Yurtsev bb12184551
Internal code deprecation API (#8763)
Proposal for an internal API to deprecate LangChain code.

This PR is heavily based on:
https://github.com/matplotlib/matplotlib/blob/main/lib/matplotlib/_api/deprecation.py

This PR only includes deprecation functionality (no renaming etc.). 
Additional functionality can be added on a need basis (e.g., renaming
parameters), but best to roll out as an MVP to test this
out.

DeprecationWarnings are ignored by default. We can change the policy for
the deprecation warnings, but we'll need to make sure we're not creating
noise for users due to internal code invoking deprecated functionality.
1 year ago
Leonid Ganeline 33a2f58fbf
`tensorflow_datasets` document loader (#8721)
This PR adds the `tensorflow_datasets` document loader
1 year ago
Holt Skinner fad26e79a3
fix: Resolve `AttributeError` in Google Cloud Enterprise Search retriever (#8872)
- Reverting some of the changes made in
https://github.com/langchain-ai/langchain/pull/8369
1 year ago
William FH b2eb4ff0fc
Relax Validation in Eval (#8902)
Just check for missing keys
1 year ago
Leonid Ganeline 2d078c7767
`PubMed` document loader (#8893)
- added `PubMed` document loader artifacts; unit tests; examples
- fixed the `PubMed` utility; unit tests

@hwchase17
1 year ago
Ofer Mendelevitch a7824f16f2
Added consistent timeout for Vectara calls (#8892)
- Description: consistent timeout at 60s for all calls to Vectara API
- Tag maintainer: @rlancemartin, @eyurtsev

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Bagatur 642b57c7ff
nit (#8927) 1 year ago
manmax31 4a07fba9f0
Improve query prompt of BGE embeddings (#8908)
- Description: Improved the query prompt of BGE embeddings after talking with
the devs of BGE embeddings
  - Tag maintainer: @hwchase17
  - Twitter handle: @ManabChetia3

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
1 year ago
Chris Pappalardo beab637f04
added filter kwarg to VectorStoreIndexWrapper query and query_with_so… (#8844)
- Description: added filter to query methods in VectorStoreIndexWrapper
for filtering by metadata (i.e. search_kwargs)
- Tag maintainer: @rlancemartin, @eyurtsev

Updated the doc snippet on this topic as well. It took me a long while
to figure out how to filter the vectorstore by filename, so this might
help someone else out.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
David vonThenen bf4a112aa6
Fixes to the Nebula LLM Integration (#8918)
This addresses some issues with introducing the Nebula LLM to LangChain
in this PR:
https://github.com/langchain-ai/langchain/pull/8876

This fixes the following:
- Removes `SYMBLAI` from variable names
- Fixes bug with `Bearer` for the API KEY


Thanks again in advance for your help!
cc: @hwchase17, @baskaryan

---------

Co-authored-by: dvonthenen <david.vonthenen@gmail.com>
1 year ago
Marie-Philippe Gill 6b9f266837
Add user_context to AmazonKendraRetriever (#8869)
### Description 

Now, we can pass information like a JWT token using user_context:  

```python
self.retriever = AmazonKendraRetriever(index_id=kendraIndexId, user_context={"Token": jwt_token})
```

- [x] `make lint`
- [x] `make format`
- [x] `make test`

Also tested by pip installing in my own project, and it allows access
through the token.

### Maintainers 

 @rlancemartin, @eyurtsev

### My twitter handle 

[girlknowstech](https://twitter.com/girlknowstech)
1 year ago
GitHub-L 67718c1d6b
Update OpenAPI code to fetch and use the requestBody
- Description: The API doc passed to the LLM only included the content of
responses but did not include the content of the requestBody, causing the
agent to be unable to construct the correct request parameters based on
the requestBody information. Adding two lines of code fixed the bug.
  - Tag maintainer: @hinthornw
1 year ago
Leonid Kuligin 52d6b91c18
Fixed a source for documents uploaded from GCS (#8912)
Sets the source for documents uploaded from GCS to their GCS source.
#8911

Co-authored-by: Leonid Kuligin <kuligin@google.com>
1 year ago
Bagatur 022ef170f8
bump 257 (#8903) 1 year ago
Jacob Lee fa30a57034
Adds Ollama as an LLM (#8829)
Adds Ollama as an LLM. Ollama can run various open source models locally
e.g. Llama 2 and Vicuna, automatically configuring and GPU-optimizing
them.

@rlancemartin @hwchase17

---------

Co-authored-by: Lance Martin <lance@langchain.dev>
1 year ago
Ash Vardanian 1f9124ceaa
Add: USearch Vector Store (#8835)
## Description

I am excited to propose an integration with USearch, a lightweight
vector-search engine available for both Python and JavaScript, among
other languages.

## Dependencies

It introduces a new PyPi dependency - `usearch`. I am unsure if it must
be added to the Poetry file, as this would make the PR too clunky.
Please let me know.

## Profiles

- Maintainers: @ashvardanian @davvard
- Twitter handles: @ashvardanian @unum_cloud

---------

Co-authored-by: Davit Vardanyan <78792753+davvard@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Leonid Kuligin b52a3785c9
Allow to specify a custom loader for GcsFileLoader (#8868)
Co-authored-by: Leonid Kuligin <kuligin@google.com>
1 year ago
Bruno Bornsztein d56eff042a
Make json output parser handle newlines inside markdown code blocks (#8682)
Update to #8528

Newlines and other special characters within markdown code blocks
returned as `action_input` should be handled correctly (in particular,
unescaped `"` => `\"` and `\n` => `\\n`) so they don't break JSON
parsing.

@baskaryan
1 year ago
Oege Dijk cff52638b2
when encountering error during fetch return "" in web_base.py (#8753)
When e.g. downloading a sitemap with a malformed url (e.g.
"ttp://example.com/index.html", with the h omitted at the beginning of
the url), this will ensure that the sitemap download does not crash but
just emits a warning. (This could perhaps be made optional with e.g. a
`skip_faulty_urls: bool = True` parameter, but this was the most
straightforward fix.)

@rlancemartin, @eyurtsev
---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Bennji94 33cdb06b5c
Async RetryOutputParser, RetryWithErrorOutputParser and OutputFixingParser (#8776)
Added async parsing functions for RetryOutputParser,
RetryWithErrorOutputParser and OutputFixingParser.

The async parse functions call the arun methods of the used LLMChains.

Fix for #7989

---------

Co-authored-by: Benjamin May <benjamin.may94@gmail.com>
1 year ago
Joshua Sundance Bailey 7fc07ba5df
Create ChatAnyscale (#8770)
- Description: Adds the ChatAnyscale class with llama-2 7b, llama-2 13b,
and llama-2 70b on [Anyscale
Endpoints](https://app.endpoints.anyscale.com/)
- It inherits from ChatOpenAI and requires openai (probably unnecessary
but it made for a quick and easy implementation)
- Inspired by https://github.com/langchain-ai/langchain/pull/8434
(@kylehh and @baskaryan )
1 year ago
idcore fe78aff1f2
Add new parameter forced_decoder_ids to OpenAIWhisperParserLocal + small bug fix (#8793)
- Description: new parameter `forced_decoder_ids` for
OpenAIWhisperParserLocal to force the input language and enable an optional
translate mode. A cleaned-up usage example is shown below.
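A cleaned-up version of the usage example from the description; the import paths and the `urls`/`save_dir` placeholders are assumptions:

```python
from transformers import WhisperProcessor
from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.blob_loaders.youtube_audio import YoutubeAudioLoader
from langchain.document_loaders.parsers.audio import OpenAIWhisperParserLocal

urls = ["<youtube-url>"]   # placeholder
save_dir = "audio/"        # placeholder

processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
# Force French input; pass task="translate" instead to enable translate mode.
forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe")

loader = GenericLoader(
    YoutubeAudioLoader(urls, save_dir),
    OpenAIWhisperParserLocal(
        lang_model="openai/whisper-medium",
        forced_decoder_ids=forced_decoder_ids,
    ),
)
docs = loader.load()
```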
  - Issue #8792
  - Tag maintainer: @rlancemartin, @eyurtsev

---------

Co-authored-by: idcore <eugene.novozhilov@gmail.com>
1 year ago
David vonThenen 40079d4936
Introduce Nebula LLM to LangChain (#8876)
## Description

This PR adds Nebula to the available LLMs in LangChain.

Nebula is an LLM focused on conversation understanding and enables users
to extract conversation insights from video, audio, text, and chat-based
conversations. These conversations can occur between any mix of human or
AI participants.

Examples of some questions you could ask Nebula from a given
conversation are:
- What could be the customer’s pain points based on the conversation?
- What sales opportunities can be identified from this conversation?
- What best practices can be derived from this conversation for future
customer interactions?

You can read more about Nebula here:

https://symbl.ai/blog/extract-insights-symbl-ai-generative-ai-recall-ai-meetings/

#### Integration Test 

An integration test is added, but it requires network access: since
Nebula is fully managed like OpenAI, the test has to reach the hosted
service in order to exercise the integration.

#### Linting

- [x] make lint
- [x] make test (TODO: there seems to be a failure in another, unrelated
test; need to check on this.)
- [x] make format

### Dependencies

No new dependencies were introduced.

### Twitter handle

[@symbldotai](https://twitter.com/symbldotai)
[@dvonthenen](https://twitter.com/dvonthenen)


If you have any questions, please let me know.

cc: @hwchase17, @baskaryan

---------

Co-authored-by: dvonthenen <david.vonthenen@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Eugene Yurtsev f616aee35a
JsonOutputFunctionParser: Fix mutation in place bug (#8758)
Fixes mutation in place in the JsonOutputFunctionParser. This causes
issues when trying to re-use the original AI message.
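A generic illustration of the mutation-in-place hazard (not the parser's
actual code): parse the function-call arguments from a copy so the original
message's `additional_kwargs` stay untouched:

```python
import copy
import json

# Sketch of the hazard described above, not the parser's real implementation.
message_kwargs = {
    "function_call": {"name": "search", "arguments": '{"query": "langchain"}'}
}

def parse_without_mutation(additional_kwargs: dict) -> dict:
    # Work on a deep copy so callers can safely re-use the original message.
    func_call = copy.deepcopy(additional_kwargs["function_call"])
    func_call["arguments"] = json.loads(func_call["arguments"])
    return func_call

parsed = parse_without_mutation(message_kwargs)
print(parsed["arguments"]["query"])                   # "langchain"
print(message_kwargs["function_call"]["arguments"])   # original string intact
```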
1 year ago
shibuiwilliam ab47557db3
fix evaluation parse test (#8859)
# What
- fix evaluation parse test

- Description: Fix evaluation parse test
- Issue: None
- Dependencies: None
- Tag maintainer: @baskaryan
- Twitter handle: @MLOpsJ
1 year ago
manmax31 40096c73cd
Add BGE embeddings support (#8848)
- Description: [BGE-large](https://huggingface.co/BAAI/bge-large-en)
embeddings from BAAI are at the top of [MTEB
leaderboard](https://huggingface.co/spaces/mteb/leaderboard). Hence
adding support for it.
- Tag maintainer: @baskaryan
- Twitter handle: @ManabChetia3
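A short usage sketch, assuming the support is exposed as a
`HuggingFaceBgeEmbeddings` class (the class name and keyword arguments are
assumptions based on this description):

```python
# Sketch only: class name and keyword arguments are assumptions.
from langchain.embeddings import HuggingFaceBgeEmbeddings

embeddings = HuggingFaceBgeEmbeddings(
    model_name="BAAI/bge-large-en",
    model_kwargs={"device": "cpu"},
    encode_kwargs={"normalize_embeddings": True},  # commonly recommended for BGE
)
vector = embeddings.embed_query("What is BGE?")
print(len(vector))
```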

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
1 year ago
shibuiwilliam fbc83dfdbb
Fix/abstract add message (#8856)
- Description: Fix/abstract add message
- Issue: None
- Dependencies: None
- Twitter handle: @MLOpsJ
1 year ago
William FH 91be7eee66
Add concurrency support for run_on_dataset (#8841)
Long-term, it would be better to use the lower-level `batch()` method(s),
but that may take a bit longer to clean up. This unblocks things in the
meantime, though it may fail when the evaluated chain raises a
`NotImplementedError` for a corresponding async method.
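A rough call sketch; the import path and the `concurrency_level` keyword are
assumptions inferred from this description:

```python
# Sketch only: the concurrency keyword and exact signature are assumptions.
from langchain.smith import run_on_dataset
from langchain.chat_models import ChatOpenAI
from langsmith import Client

client = Client()
run_on_dataset(
    client=client,
    dataset_name="my-eval-dataset",              # hypothetical dataset
    llm_or_chain_factory=lambda: ChatOpenAI(),
    concurrency_level=5,                         # assumed new parameter
)
```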
1 year ago
Bagatur fc2f450f2d
bump 256 (#8870) 1 year ago
Tudor Golubenco aeaef8f3a3
Add support for Xata as a vector store (#8822)
This adds support for [Xata](https://xata.io) (a data platform based on
Postgres) as a vector store. We have recently added [Xata to
Langchain.js](https://github.com/hwchase17/langchainjs/pull/2125) and
would love to have the equivalent in the Python project as well.

The PR includes integration tests and a Jupyter notebook as docs. Please
let me know if anything else would be needed or helpful.

I have added the xata python SDK as an optional dependency.

## To run the integration tests

You will need to create a DB in xata (see the docs), then run something
like:

```
OPENAI_API_KEY=sk-... XATA_API_KEY=xau_... XATA_DB_URL='https://....xata.sh/db/langchain'  poetry run pytest tests/integration_tests/vectorstores/test_xata.py
```
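And a minimal indexing sketch, treating the class name (`XataVectorStore`)
and constructor keywords as assumptions based on this description:

```python
# Sketch only: class name and keyword arguments are assumptions.
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import XataVectorStore

vector_store = XataVectorStore.from_texts(
    ["Xata is a data platform built on Postgres."],
    OpenAIEmbeddings(),
    api_key="xau_...",                              # hypothetical Xata API key
    db_url="https://example.xata.sh/db/langchain",  # hypothetical database URL
    table_name="vectors",                           # hypothetical table
)
print(vector_store.similarity_search("What is Xata?", k=1))
```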

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Philip Krauss <35487337+philkra@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
1 year ago
Leonid Kuligin 6e3fa59073
Added chat history to codey models (#8831)
#7469

Since 1.29.0, the Vertex SDK supports a chat history provided to a codey
chat model.
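A hedged sketch of passing prior turns to a codey chat model through
`ChatVertexAI`; exactly how the history is threaded to the SDK is an
assumption here:

```python
# Sketch only: how the SDK consumes prior turns is an assumption.
from langchain.chat_models import ChatVertexAI
from langchain.schema import AIMessage, HumanMessage

chat = ChatVertexAI(model_name="codechat-bison")
history = [
    HumanMessage(content="Write a Python function that reverses a string."),
    AIMessage(content="def reverse(s: str) -> str:\n    return s[::-1]"),
]
response = chat(history + [HumanMessage(content="Now add type hints to the docstring.")])
print(response.content)
```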

Co-authored-by: Leonid Kuligin <kuligin@google.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
1 year ago
Massimiliano Pronesti a616e19975
feat(llms): add support for vLLM (#8806)
Hello langchain maintainers,
this PR aims at integrating
[vllm](https://vllm.readthedocs.io/en/latest/#) into langchain. This PR
closes #8729.

This feature clearly depends on `vllm`, but I've seen other models
supported here depend on packages that are not included in the
pyproject.toml (e.g. `gpt4all`, `text-generation`), so I assumed the same
approach was acceptable here as well.

@hwchase17, @baskaryan
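A brief usage sketch, assuming the integration is exposed as a `VLLM` class
under `langchain.llms` (the keyword arguments shown are assumptions based on
vLLM's documented options):

```python
# Sketch only: class name and keyword arguments are assumptions.
from langchain.llms import VLLM

llm = VLLM(
    model="mosaicml/mpt-7b",   # any model supported by vLLM
    trust_remote_code=True,    # needed for some Hugging Face models
    max_new_tokens=128,
    temperature=0.8,
)
print(llm("What is the capital of France?"))
```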

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
1 year ago