Commit Graph

74 Commits

Author SHA1 Message Date
Joseph McElroy
175ef0a55d
[ElasticsearchStore] Enable custom Bulk Args (#11065)
This enables bulk args like `chunk_size` to be passed down from the
ingest methods (`from_texts`, `from_documents`) to the bulk API.

This helps alleviate issues where bulk importing a large number of
documents into Elasticsearch resulted in timeouts.
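
A minimal sketch of the kind of call this enables — the `bulk_kwargs` parameter name is an assumption based on the PR description, and the values are placeholders:

```python
from langchain.docstore.document import Document
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.elasticsearch import ElasticsearchStore

docs = [Document(page_content="foo"), Document(page_content="bar")]

# Assumption: bulk-related args (e.g. chunk_size) are forwarded to the
# Elasticsearch bulk helper via a keyword such as `bulk_kwargs`, which helps
# avoid timeouts when ingesting a large number of documents.
store = ElasticsearchStore.from_documents(
    docs,
    OpenAIEmbeddings(),
    es_url="http://localhost:9200",
    index_name="my-index",
    bulk_kwargs={"chunk_size": 50},
)
```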

Contribution Shoutout
- @elastic

- [x] Updated Integration tests

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-09-26 12:53:50 -07:00
Greg Richardson
4eee789dd3
Docs: Using SupabaseVectorStore with existing documents (#10907)
## Description
Adds additional docs on how to use `SupabaseVectorStore` with existing
data in your DB (vs inserting new documents each time).
2023-09-22 08:18:56 -07:00
Matvey Arye
6e02c45ca4
Add integration for Timescale Vector(Postgres) (#10650)
**Description:**
This commit adds a vector store for the Postgres-based vector database
(`TimescaleVector`).

Timescale Vector (https://www.timescale.com/ai) is PostgreSQL++ for AI
applications. It enables you to efficiently store and query billions of
vector embeddings in `PostgreSQL`:
- Enhances `pgvector` with faster and more accurate similarity search on
1B+ vectors via a DiskANN-inspired indexing algorithm.
- Enables fast time-based vector search via automatic time-based
partitioning and indexing.
- Provides a familiar SQL interface for querying vector embeddings and
relational data.

Timescale Vector scales with you from POC to production:
- Simplifies operations by enabling you to store relational metadata,
vector embeddings, and time-series data in a single database.
- Benefits from a rock-solid PostgreSQL foundation with enterprise-grade
features like streaming backups and replication, high availability, and
row-level security.
- Enables a worry-free experience with enterprise-grade security and
compliance.

Timescale Vector is available on Timescale, the cloud PostgreSQL
platform. (There is no self-hosted version at this time.) LangChain
users get a 90-day free trial for Timescale Vector.
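
A minimal sketch of using the new store, following the integration docs; treat the exact parameter names as assumptions and the connection string as a placeholder:

```python
from langchain.docstore.document import Document
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.timescalevector import TimescaleVector

docs = [Document(page_content="Timescale Vector is PostgreSQL++ for AI applications.")]

# Assumption: the store is configured with a Timescale service URL and a
# collection name, mirroring other Postgres-backed LangChain vector stores.
store = TimescaleVector.from_documents(
    documents=docs,
    embedding=OpenAIEmbeddings(),
    collection_name="my_collection",
    service_url="postgres://user:password@host:5432/tsdb",  # placeholder
)
results = store.similarity_search("What is Timescale Vector?", k=1)
```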

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Avthar Sewrathan <avthar@timescale.com>
2023-09-21 07:33:37 -07:00
zhanghexian
0abe996409
add clustered vearch in langchain (#10771)
---------

Co-authored-by: zhanghexian1 <zhanghexian1@jd.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-09-19 21:22:23 -07:00
Anar
c656a6b966
LLMRails (#10796)
### LLMRails Integration
This PR provides integration with LLMRails. Implemented here are:

langchain/vectorstore/llm_rails.py
tests/integration_tests/vectorstores/test_llm_rails.py
docs/extras/integrations/vectorstores/llm-rails.ipynb

---------

Co-authored-by: Anar Aliyev <aaliyev@mgmt.cloudnet.services>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-09-19 20:33:33 -07:00
Harrison Chase
d2bee34d4c
Harrison/add vald (#10807)
Co-authored-by: datelier <57349093+datelier@users.noreply.github.com>
2023-09-19 16:42:52 -07:00
Harrison Chase
5442d2b1fa
Harrison/stop importing from init (#10690) 2023-09-16 17:22:48 -07:00
Bagatur
2ae568dcf5
Separate platforms integrations docs (#10609) 2023-09-15 12:18:57 -07:00
Aashish Saini
7608f85f13
Removed duplicate heading (#10570)
**I recently reviewed the content and identified that the heading
appeared twice in the docs.**
2023-09-14 12:35:37 -07:00
Bagatur
7f3f6097e7
Add mmr support to redis retriever (#10556) 2023-09-14 08:43:50 -07:00
Tomaz Bratanic
e1e01d6586
Add Neo4j vector index hybrid search (#10442)
Adding support for the Neo4j vector index hybrid search option. In Neo4j,
you can achieve hybrid search by using a combination of vector and
fulltext indexes.
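
A minimal sketch of enabling the hybrid option when building the store; the connection values are placeholders, and the `search_type` value is an assumption based on the description:

```python
from langchain.docstore.document import Document
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Neo4jVector

docs = [Document(page_content="Neo4j recently added a vector index.")]

# Assumption: hybrid search combines the vector index with a fulltext index
# and is selected via a search-type switch when the store is created.
store = Neo4jVector.from_documents(
    docs,
    OpenAIEmbeddings(),
    url="bolt://localhost:7687",
    username="neo4j",
    password="password",
    search_type="hybrid",  # assumed value; may alternatively be a SearchType enum
)
results = store.similarity_search("vector index", k=1)
```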

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-09-14 08:29:16 -07:00
Stefano Lottini
415d38ae62
Cassandra Vector Store, add metadata filtering + improvements (#9280)
This PR addresses a few minor issues with the Cassandra vector store
implementation and extends the store to support Metadata search.

Thanks to the latest cassIO library (>=0.1.0), metadata filtering is
available in the store.

Further,
- the "relevance" score is prevented from being flipped in the [0,1]
interval, thus ensuring that 1 corresponds to the closest vector (this
is related to how the underlying cassIO class returns the cosine
difference);
- bumped the cassIO package version both in the notebooks and the
pyproject.toml;
- adjusted the textfile location for the vector-store example after the
reshuffling of the Langchain repo dir structure;
- added demonstration of metadata filtering in the Cassandra vector
store notebook (a brief sketch follows this list);
- better docstring for the Cassandra vector store class;
- fixed test flakiness and removed offending out-of-place escape chars
from a test module docstring;
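
A minimal sketch of the metadata filtering usage; the connection details and filter values are placeholders, and the `filter` keyword on `similarity_search` is assumed from the description:

```python
from cassandra.cluster import Cluster
from langchain.docstore.document import Document
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Cassandra

# Placeholder connection to a local Cassandra node.
session = Cluster(["127.0.0.1"]).connect()

docs = [Document(page_content="foo", metadata={"source": "wiki"})]
store = Cassandra.from_documents(
    docs,
    OpenAIEmbeddings(),
    session=session,
    keyspace="my_keyspace",
    table_name="my_table",
)

# Assumption: metadata filtering is expressed as a simple equality dict.
results = store.similarity_search("foo", k=1, filter={"source": "wiki"})
```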

To my knowledge all relevant tests pass and mypy+black+ruff don't
complain. (mypy gives unrelated errors in other modules, which clearly
don't depend on the content of this PR).

Thank you!
Stefano

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-09-13 14:18:39 -07:00
Christopher Pereira
4c732c8894
Fixed documentation (#10451)
It's ._collection, not ._collection_
2023-09-11 11:51:58 -07:00
John Mai
e0d45e6a09
Implemented MMR search for PGVector (#10396)
Description: Implemented MMR search for PGVector.
Issue: #7466
Dependencies: None
Tag maintainer: 
Twitter handle: @JohnMai95
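
A minimal sketch of calling the new MMR search on an existing `PGVector` store; the connection string and collection name are placeholders:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.pgvector import PGVector

# Assumption: an existing pgvector-enabled Postgres database and collection.
store = PGVector(
    connection_string="postgresql+psycopg2://user:password@localhost:5432/db",
    embedding_function=OpenAIEmbeddings(),
    collection_name="my_collection",
)

# MMR fetches `fetch_k` candidates by similarity, then re-ranks them to
# balance relevance against diversity before returning `k` results.
results = store.max_marginal_relevance_search("query", k=4, fetch_k=20, lambda_mult=0.5)
```
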
2023-09-09 15:26:22 -07:00
zhanghexian
62fa2bc518
Add Vearch vectorstore (#9846)
---------

Co-authored-by: zhanghexian1 <zhanghexian1@jd.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-09-08 16:51:14 -07:00
Bagatur
7203c97e8f
Add redis self-query support (#10199) 2023-09-08 16:43:16 -07:00
Hamza Tahboub
8c0f391815
Implemented MMR search for Redis (#10140)
Description: Implemented MMR search for Redis. Pretty straightforward,
just using the already implemented MMR method on similarity
search–fetched docs.
Issue: #10059
Dependencies: None
Twitter handle: @hamza_tahboub
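
A minimal sketch of the resulting usage against a local Redis Stack instance; values are placeholders:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.redis import Redis

# Assumption: a local Redis instance with the search module enabled.
rds = Redis.from_texts(
    ["foo", "bar", "baz"],
    OpenAIEmbeddings(),
    redis_url="redis://localhost:6379",
    index_name="mmr-demo",
)

# MMR first fetches `fetch_k` candidates by similarity, then re-ranks them
# for diversity before returning `k` results.
docs = rds.max_marginal_relevance_search("foo", k=2, fetch_k=10, lambda_mult=0.5)

# Equivalent via the retriever interface (retriever-level MMR support for
# Redis was added separately in #10556 above).
retriever = rds.as_retriever(search_type="mmr", search_kwargs={"k": 2, "fetch_k": 10})
```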

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-09-08 15:14:44 -07:00
Greg Richardson
300559695b
Supabase vector self querying retriever (#10304)
## Description
Adds Supabase Vector as a self-querying retriever.

- Designed to be backwards compatible with existing `filter` logic on
`SupabaseVectorStore`.
- Adds new filter `postgrest_filter` to `SupabaseVectorStore`
`similarity_search()` methods
- Supports entire PostgREST [filter query
language](https://postgrest.org/en/stable/references/api/tables_views.html#read)
(used by self-querying retriever, but also works as an escape hatch for
more query control)
- `SupabaseVectorTranslator` converts Langchain filter into the above
PostgREST query
- Adds Jupyter Notebook for the self-querying retriever
- Adds tests
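
A minimal sketch of wiring the store into a self-querying retriever; the metadata schema and the `postgrest_filter` string are illustrative assumptions, and the Supabase credentials are placeholders:

```python
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.vectorstores import SupabaseVectorStore
from supabase.client import create_client

# Placeholders: a Supabase project set up with pgvector per the docs.
supabase = create_client("https://<project>.supabase.co", "<service-role-key>")
store = SupabaseVectorStore(
    client=supabase,
    embedding=OpenAIEmbeddings(),
    table_name="documents",
    query_name="match_documents",
)

# Illustrative metadata schema for the self-querying retriever.
metadata_field_info = [AttributeInfo(name="year", description="Release year", type="integer")]
retriever = SelfQueryRetriever.from_llm(
    OpenAI(temperature=0), store, "Brief summary of a movie", metadata_field_info
)

# The new `postgrest_filter` escape hatch can also be passed directly;
# the filter string syntax here is illustrative (see the PostgREST docs).
docs = store.similarity_search("space adventure", postgrest_filter="metadata->>year.gte.2000")
```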

## Tag maintainer
@hwchase17

## Twitter handle
[@ggrdson](https://twitter.com/ggrdson)
2023-09-07 15:03:26 -07:00
Ofer Mendelevitch
a9eb7c6cfc
Adding Self-querying for Vectara (#10332)
- Description: Adding support for self-querying to Vectara integration
  - Issue: per customer request
  - Tag maintainer: @rlancemartin @baskaryan 
  - Twitter handle: @ofermend 

Also updated some documentation, added self-query testing, and a demo
notebook with self-query example.
2023-09-07 10:24:50 -07:00
Bagatur
f0ccce76fe
nuclia db nit (#10334) 2023-09-07 09:48:56 -07:00
Bagatur
205f406485
nuclia nb nit (#10331) 2023-09-07 08:49:33 -07:00
Eric BREHAULT
19b4ecdc39
Implement NucliaDB vector store (#10236)
# Description

This pull request allows using
[NucliaDB](https://docs.nuclia.dev/docs/docs/nucliadb/intro) as a vector
store in LangChain.

It works with both a [local NucliaDB
instance](https://docs.nuclia.dev/docs/docs/nucliadb/deploy/basics) and
[Nuclia Cloud](https://nuclia.cloud).
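
A minimal sketch of both modes; the constructor parameter names are assumptions and the values are placeholders:

```python
from langchain.vectorstores.nucliadb import NucliaDB

# Nuclia Cloud (assumed constructor parameters; values are placeholders).
store = NucliaDB(knowledge_box="<kb_id>", local=False, api_key="<nuclia_api_key>")

# Local NucliaDB instance (assumed `backend` parameter pointing at the server).
local_store = NucliaDB(knowledge_box="<kb_id>", local=True, backend="http://localhost:8080")

store.add_texts(["NucliaDB can act as a LangChain vector store."])
results = store.similarity_search("vector store", k=1)
```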

# Dependencies

It requires an up-to-date version of the `nuclia` Python package.

@rlancemartin, @eyurtsev, @hinthornw, please review it when you have a
moment :)

Note: our Twitter handle is `@NucliaAI`
2023-09-06 16:26:14 -07:00
Pihplipe Oegr
bce38b7163
Add notebook example to use sqlite-vss as a vector store. (#10292)
Follow-up PR for https://github.com/langchain-ai/langchain/pull/10047,
simply adding a notebook quickstart example for the vector store with
SQLite, using the class SQLiteVSS.
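
A minimal sketch along the lines of the notebook; the `table` and `db_file` parameter names follow the SQLiteVSS class added in the earlier PR, but treat them as assumptions:

```python
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
from langchain.vectorstores import SQLiteVSS

texts = ["Ketanji Brown Jackson was nominated to the Supreme Court", "foo", "bar"]
embedding = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")

# Assumption: `table` names the virtual table and `db_file` the SQLite file.
db = SQLiteVSS.from_texts(
    texts=texts,
    embedding=embedding,
    table="demo_table",
    db_file="/tmp/vss.db",
)
results = db.similarity_search("Who was nominated to the Supreme Court?", k=1)
```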

Maintainer tag @baskaryan

Co-authored-by: Philippe Oger <philippe.oger@adevinta.com>
2023-09-06 13:46:59 -07:00
Aashish Saini
8bba69ffd0
Fixed some grammatical typos in doc files (#10191)
Fixed some grammatical typos in doc files
CC: @baskaryan, @eyurtsev, @rlancemartin.

---------

Co-authored-by: Aashish Saini <141953346+AashishSainiShorthillsAI@users.noreply.github.com>
Co-authored-by: AryamanJaiswalShorthillsAI <142397527+AryamanJaiswalShorthillsAI@users.noreply.github.com>
Co-authored-by: Adarsh Shrivastav <142413097+AdarshKumarShorthillsAI@users.noreply.github.com>
Co-authored-by: Vishal <141389263+VishalYadavShorthillsAI@users.noreply.github.com>
Co-authored-by: ChetnaGuptaShorthillsAI <142381084+ChetnaGuptaShorthillsAI@users.noreply.github.com>
Co-authored-by: PankajKumarShorthillsAI <142473460+PankajKumarShorthillsAI@users.noreply.github.com>
Co-authored-by: AbhishekYadavShorthillsAI <142393903+AbhishekYadavShorthillsAI@users.noreply.github.com>
Co-authored-by: AmitSinghShorthillsAI <142410046+AmitSinghShorthillsAI@users.noreply.github.com>
Co-authored-by: Md Nazish Arman <142379599+MdNazishArmanShorthillsAI@users.noreply.github.com>
Co-authored-by: KamalSharmaShorthillsAI <142474019+KamalSharmaShorthillsAI@users.noreply.github.com>
Co-authored-by: Lakshya <lakshyagupta87@yahoo.com>
Co-authored-by: Aayush <142384656+AayushShorthillsAI@users.noreply.github.com>
Co-authored-by: AnujMauryaShorthillsAI <142393269+AnujMauryaShorthillsAI@users.noreply.github.com>
2023-09-04 10:48:08 -07:00
Leonid Ganeline
056e59672b
docs: DeepLake example (#9663)
Updated the `Deep Lake` example. Added a link to an example provided by
Activeloop.
2023-09-03 20:42:52 -07:00
Bagatur
3efab8d3df
implement vectorstores by tencent vectordb (#9989)
Hi there!
I'm excited to open this PR to add support for using 'Tencent Cloud
VectorDB' as a vector store.

Tencent Cloud VectorDB is a fully-managed, self-developed,
enterprise-level distributed database service designed for storing,
retrieving, and analyzing multi-dimensional vector data. The database
supports multiple index types and similarity calculation methods, with a
single index supporting vector scales up to 1 billion and capable of
handling millions of QPS with millisecond-level query latency. Tencent
Cloud VectorDB not only provides external knowledge bases for large
models to improve their accuracy, but also has wide applications in AI
fields such as recommendation systems, NLP services, computer vision,
and intelligent customer service.
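
A rough sketch of how the store might be constructed; the class name, `ConnectionParams` helper, and parameter names are assumptions based on this description, and all values are placeholders:

```python
from langchain.docstore.document import Document
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.tencentvectordb import ConnectionParams, TencentVectorDB

docs = [Document(page_content="Tencent Cloud VectorDB is a managed vector database.")]

# Assumptions: `ConnectionParams` carries the endpoint and credentials, and the
# store is built with the usual `from_documents` constructor.
conn = ConnectionParams(url="http://<instance-endpoint>", username="root", key="<api-key>")
store = TencentVectorDB.from_documents(docs, OpenAIEmbeddings(), connection_params=conn)
results = store.similarity_search("managed vector database", k=1)
```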

The PR includes:
- Implementation of Vectorstore.

I have read your [contributing
guidelines](72b7d76d79/.github/CONTRIBUTING.md).
And I have passed the tests below:
- make format
- make lint
- make coverage
- make test
2023-08-31 00:48:25 -07:00
Bagatur
b1644bc9ad cr 2023-08-31 00:43:34 -07:00
Cameron Vetter
e37d51cab6
fix scoring profile example (#10016)
- Description: A change in the documentation example for Azure Cognitive
Vector Search with Scoring Profile so the example works as written
  - Issue: #10015 
  - Dependencies: None
  - Tag maintainer: @baskaryan @ruoccofabrizio
  - Twitter handle: @poshporcupine
2023-08-31 00:35:06 -07:00
wlleiiwang
8c4e29240c implement vectorstores by tencent vectordb 2023-08-30 16:40:58 +08:00
Bagatur
16eb935469
Fix for similarity_search_with_score (#9903)
- Description: the implementation for similarity_search_with_score did
not actually include a score or logic to filter. Now fixed.
- Tag maintainer: @rlancemartin
- Twitter handle: @ofermend
2023-08-29 15:04:48 -07:00
Tomaz Bratanic
db13fba7ea
Add neo4j vector support (#9770)
Neo4j has added vector index integration just recently. To support both
ingestion and its use in vector-based RAG applications, I wrapped it as
a vector store, as the implementation is completely different from
`GraphCypherQAChain`. Here, we are not generating any Cypher statements
at query time, we are simply doing the vector similarity search using
the new vector index as if we were dealing with a vector database.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-29 07:54:20 -07:00
Tudor Golubenco
171b0b183b
Pre-release Xata version no longer required (#9915)
Tiny PR: Since we've released version 1.0.0 of the python SDK, we no
longer need to specify the pre-release version when pip installing.
2023-08-29 07:21:22 -07:00
Ofer Mendelevitch
8b8d2a6535 fixed similarity_search_with_score to really use a score
updated unit test with a test for score threshold
Updated demo notebook
2023-08-28 22:26:55 -07:00
Sam Partee
a28eea5767
Redis metadata filtering and specification, index customization (#8612)
### Description

The previous Redis implementation did not allow for the user to specify
the index configuration (i.e. changing the underlying algorithm) or add
additional metadata to use for querying (i.e. hybrid or "filtered"
search).

This PR introduces the ability to specify custom index attributes and
metadata attributes as well as use that metadata in filtered queries.
Overall, more structure was introduced to the Redis implementation that
should allow for easier maintainability moving forward.

# New Features

The following features are now available with the Redis integration into
Langchain

## Index schema generation

The schema for the index will now be automatically generated if not
specified by the user. For example, the data above has multiple
metadata categories. The following example

```python

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.redis import Redis

embeddings = OpenAIEmbeddings()


rds, keys = Redis.from_texts_return_keys(
    texts,
    embeddings,
    metadatas=metadata,
    redis_url="redis://localhost:6379",
    index_name="users"
)
```

Loading the data in through this and the other ``from_documents`` and
``from_texts`` methods will now generate an index schema in Redis like the
following.

View the index schema with the ``redisvl`` tool ([link](redisvl.com)).

```bash
$ rvl index info -i users
```


Index Information:

| Index Name | Storage Type | Prefixes | Index Options | Indexing |
|--------------|----------------|---------------|-----------------|------------|
| users | HASH | ['doc:users'] | [] | 0 |

Index Fields:

| Name | Attribute | Type | Field Option | Option Value |
|----------------|----------------|---------|----------------|----------------|
| user | user | TEXT | WEIGHT | 1 |
| job | job | TEXT | WEIGHT | 1 |
| credit_score | credit_score | TEXT | WEIGHT | 1 |
| content | content | TEXT | WEIGHT | 1 |
| age | age | NUMERIC | | |
| content_vector | content_vector | VECTOR | | |


### Custom Metadata specification

The metadata schema generation has the following rules:
1. All text fields are indexed as text fields.
2. All numeric fields are indexed as numeric fields.

If you would like to have a text field indexed as a tag field instead, you can
specify overrides like the following for the example data

```python

# this can also be a path to a yaml file
index_schema = {
    "text": [{"name": "user"}, {"name": "job"}],
    "tag": [{"name": "credit_score"}],
    "numeric": [{"name": "age"}],
}

rds, keys = Redis.from_texts_return_keys(
    texts,
    embeddings,
    metadatas=metadata,
    redis_url="redis://localhost:6379",
    index_name="users",
    index_schema=index_schema,  # pass the overrides defined above
)
```
This will change the index specification to 

Index Information:

| Index Name | Storage Type | Prefixes | Index Options | Indexing |
|--------------|----------------|----------------|-----------------|------------|
| users2 | HASH | ['doc:users2'] | [] | 0 |

Index Fields:

| Name | Attribute | Type | Field Option | Option Value |
|----------------|----------------|---------|----------------|----------------|
| user | user | TEXT | WEIGHT | 1 |
| job | job | TEXT | WEIGHT | 1 |
| content | content | TEXT | WEIGHT | 1 |
| credit_score | credit_score | TAG | SEPARATOR | , |
| age | age | NUMERIC | | |
| content_vector | content_vector | VECTOR | | |


and throw a warning to the user (log output) that the generated schema
does not match the specified schema.

```text
index_schema does not match generated schema from metadata.
index_schema: {'text': [{'name': 'user'}, {'name': 'job'}], 'tag': [{'name': 'credit_score'}], 'numeric': [{'name': 'age'}]}
generated_schema: {'text': [{'name': 'user'}, {'name': 'job'}, {'name': 'credit_score'}], 'numeric': [{'name': 'age'}]}
```

As long as this is on purpose, this is fine.

The schema can be defined as a YAML file or a dictionary:

```yaml

text:
  - name: user
  - name: job
tag:
  - name: credit_score
numeric:
  - name: age

```

and you pass in a path like

```python
rds, keys = Redis.from_texts_return_keys(
    texts,
    embeddings,
    metadatas=metadata,
    redis_url="redis://localhost:6379",
    index_name="users3",
    index_schema=Path("sample1.yml").resolve()
)
```

This will create the same schema as defined in the dictionary example.


Index Information:

| Index Name | Storage Type | Prefixes | Index Options | Indexing |
|--------------|----------------|----------------|-----------------|------------|
| users3 | HASH | ['doc:users3'] | [] | 0 |

Index Fields:

| Name | Attribute | Type | Field Option | Option Value |
|----------------|----------------|---------|----------------|----------------|
| user | user | TEXT | WEIGHT | 1 |
| job | job | TEXT | WEIGHT | 1 |
| content | content | TEXT | WEIGHT | 1 |
| credit_score | credit_score | TAG | SEPARATOR | , |
| age | age | NUMERIC | | |
| content_vector | content_vector | VECTOR | | |



### Custom Vector Indexing Schema

Users with large use cases may want to change how they formulate the
vector index created by Langchain.

To utilize all the features of Redis for vector database use cases like
this, you can now do the following to pass in index attribute modifiers
like changing the indexing algorithm to HNSW.

```python
vector_schema = {
    "algorithm": "HNSW"
}

rds, keys = Redis.from_texts_return_keys(
    texts,
    embeddings,
    metadatas=metadata,
    redis_url="redis://localhost:6379",
    index_name="users3",
    vector_schema=vector_schema
)

```

A more complex example may look like

```python
vector_schema = {
    "algorithm": "HNSW",
    "ef_construction": 200,
    "ef_runtime": 20
}

rds, keys = Redis.from_texts_return_keys(
    texts,
    embeddings,
    metadatas=metadata,
    redis_url="redis://localhost:6379",
    index_name="users3",
    vector_schema=vector_schema
)
```

All names correspond to the arguments you would set if using Redis-py or
RedisVL. (put in doc link later)


### Better Querying

Both vector queries and range (limit) queries are now available, and
metadata is returned by default. The outputs are shown below.

```python
>>> query = "foo"
>>> results = rds.similarity_search(query, k=1)
>>> print(results)
[Document(page_content='foo', metadata={'user': 'derrick', 'job': 'doctor', 'credit_score': 'low', 'age': '14', 'id': 'doc:users:657a47d7db8b447e88598b83da879b9d', 'score': '7.15255737305e-07'})]

>>> results = rds.similarity_search_with_score(query, k=1, return_metadata=False)
>>> print(results) # no metadata, but with scores
[(Document(page_content='foo', metadata={}), 7.15255737305e-07)]

>>> results = rds.similarity_search_limit_score(query, k=6, score_threshold=0.0001)
>>> print(len(results)) # range query (only above threshold even if k is higher)
4
```

### Custom metadata filtering

A big advantage of Redis in this space is being able to do filtering on
data stored alongside the vector itself. With the example above, the
following is now possible in langchain. The equivalence operators are
overridden to describe a new expression language that mimics that of
[redisvl](redisvl.com). This allows for arbitrarily long sequences of
filters, resembling SQL commands, that can be used directly with vector
queries and range queries.

There are two interfaces by which to do so, and both are shown below.

```python

>>> from langchain.vectorstores.redis import RedisFilter, RedisNum, RedisText

>>> age_filter = RedisFilter.num("age") > 18
>>> age_filter = RedisNum("age") > 18 # equivalent
>>> results = rds.similarity_search(query, filter=age_filter)
>>> print(len(results))
3

>>> job_filter = RedisFilter.text("job") == "engineer" 
>>> job_filter = RedisText("job") == "engineer" # equivalent
>>> results = rds.similarity_search(query, filter=job_filter)
>>> print(len(results))
2

# fuzzy match text search
>>> job_filter = RedisFilter.text("job") % "eng*"
>>> results = rds.similarity_search(query, filter=job_filter)
>>> print(len(results))
2


# combined filters (AND)
>>> combined = age_filter & job_filter
>>> results = rds.similarity_search(query, filter=combined)
>>> print(len(results))
1

# combined filters (OR)
>>> combined = age_filter | job_filter
>>> results = rds.similarity_search(query, filter=combined)
>>> print(len(results))
4
```

All the above filter results can be checked against the data above.


### Other

  - Issue: #3967 
  - Dependencies: No added dependencies
  - Tag maintainer: @hwchase17 @baskaryan @rlancemartin 
  - Twitter handle: @sampartee

---------

Co-authored-by: Naresh Rangan <naresh.rangan0@walmart.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-25 17:22:50 -07:00
Fabrizio Ruocco
cacaf487c3
Azure Cognitive Search - update sdk b8, mod user agent, search with scores (#9191)
Description: Update Azure Cognitive Search SDK to version b8 (breaking
change).
Customizable user agent.
Implemented similarity search with scores.

@baskaryan

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-25 02:34:09 -07:00
Lakshay Kansal
a8c916955f
Updates to Nomic Atlas and GPT4All documentation (#9414)
Description: Updates for Nomic AI Atlas and GPT4All integrations
documentation.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-23 17:49:44 -07:00
Keras Conv3d
cbaea8d63b
tair fix distance_type error, and add hybrid search (#9531)
- fix: distance_type error
- feature: add hybrid search to Tair

---------

Co-authored-by: thw <hanwen.thw@alibaba-inc.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-23 16:38:31 -07:00
Joseph McElroy
2a06e7b216
ElasticsearchStore: improve error logging for adding documents (#9648)
It's not obvious what the error is when you cannot index. This PR adds the
ability to log the first error's reason, to help the user diagnose the
issue.

Also added some more documentation for when you want to use the
vectorstore with an embedding model deployed in elasticsearch.

Credit: @elastic and @phoey1
2023-08-23 07:04:09 -07:00
Leonid Ganeline
e1f4f9ac3e
docs: integrations/providers (#9631)
Added missing pages for `integrations/providers` from `vectorstores`.
Updated several `vectorstores` notebooks.
2023-08-22 20:28:11 -07:00
Matthew Zeiler
949b2cf177
Improvements to the Clarifai integration (#9290)
- Improved docs
- Improved performance in multiple ways through batching, threading, etc.
- Fixed error message
- Added support for metadata filtering during similarity search.

@baskaryan PTAL
2023-08-21 12:53:36 -07:00
ricki-epsilla
66a47d9a61
add Epsilla vectorstore (#9239)
[Epsilla](https://github.com/epsilla-cloud/vectordb) vectordb is an
open-source vector database that leverages advanced academic parallel
graph traversal techniques for vector indexing.
This PR adds basic integration with
[pyepsilla](https://github.com/epsilla-cloud/epsilla-python-client)(Epsilla
vectordb python client) as a vectorstore.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-21 12:51:15 -07:00
Daniel Chalef
1d55141c50
zep/new ZepVectorStore (#9159)
- new ZepVectorStore class
- ZepVectorStore unit tests
- ZepVectorStore demo notebook
- update zep-python to ~1.0.2

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-16 00:23:07 -07:00
Xiaoyu Xee
b30f449dae
Add dashvector vectorstore (#9163)
## Description
Add `Dashvector` vectorstore for langchain

- [dashvector quick
start](https://help.aliyun.com/document_detail/2510223.html)
- [dashvector package description](https://pypi.org/project/dashvector/)

## How to use
```python
from langchain.vectorstores.dashvector import DashVector

dashvector = DashVector.from_documents(docs, embeddings)
```

---------

Co-authored-by: smallrain.xuxy <smallrain.xuxy@alibaba-inc.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-15 16:19:30 -07:00
Bagatur
bfbb97b74c
Bagatur/deeplake docs fixes (#9275)
Co-authored-by: adilkhan <adilkhan.sarsen@nu.edu.kz>
2023-08-15 15:56:36 -07:00
Hech
4b505060bd
fix: max_marginal_relevance_search and docs in Dingo (#9244) 2023-08-15 01:06:06 -07:00
Joseph McElroy
eac4ddb4bb
Elasticsearch Store Improvements (#8636)
Todo:
- [x] Connection options (cloud, localhost url, es_connection) support
- [x] Logging support
- [x] Customisable field support
- [x] Distance Similarity support 
- [x] Metadata support
  - [x] Metadata Filter support 
- [x] Retrieval Strategies
  - [x] Approx
  - [x] Approx with Hybrid
  - [x] Exact
  - [x] Custom 
  - [x] ELSER (excluding hybrid as we are working on RRF support)
- [x] integration tests 
- [x] Documentation

👋 this is a contribution to improve Elasticsearch integration with
Langchain. It's based loosely on the changes that are in master but with
some notable changes:

## Package name & design improvements
The import name is now `ElasticsearchStore`, to aid discoverability of
the VectorStore.

```py
## Before
from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch, ElasticKnnSearch

## Now
from langchain.vectorstores.elasticsearch import ElasticsearchStore
```

## Retrieval Strategy support
Before, we had a number of classes depending on the strategy you wanted:
`ElasticKnnSearch` for approx, `ElasticVectorSearch` for exact / brute
force.

With `ElasticsearchStore` we have retrieval strategies:

### Approx Example
The default strategy. The vast majority of developers who use
Elasticsearch will be inferring the embeddings outside of Elasticsearch.
Uses the kNN functionality of `_search`.

```py
texts = ["foo", "bar", "baz"]
docsearch = ElasticsearchStore.from_texts(
    texts,
    FakeEmbeddings(),
    es_url="http://localhost:9200",
    index_name="sample-index"
)
output = docsearch.similarity_search("foo", k=1)
```

### Approx, with hybrid
Developers who want to search using both the embedding and a BM25 text
match. It's simple to enable.

```py
texts = ["foo", "bar", "baz"]
docsearch = ElasticsearchStore.from_texts(
    texts,
    FakeEmbeddings(),
    es_url="http://localhost:9200",
    index_name="sample-index",
    strategy=ElasticsearchStore.ApproxRetrievalStrategy(hybrid=True)
)
output = docsearch.similarity_search("foo", k=1)
```

### Approx, with `query_model_id`
Developers who want to infer within Elasticsearch, using a model loaded
in the ML node.

This relies on the developer to set up the pipeline and index if they
wish to embed the text in Elasticsearch. An example of this is in the tests.

```py
texts = ["foo", "bar", "baz"]
docsearch = ElasticsearchStore.from_texts(
    texts,
    FakeEmbeddings(),
    es_url="http://localhost:9200",
    index_name="sample-index",
    strategy=ElasticsearchStore.ApproxRetrievalStrategy(
        query_model_id="sentence-transformers__all-minilm-l6-v2"
    ),
)
output = docsearch.similarity_search("foo", k=1)
```

### I want to provide my own custom Elasticsearch Query
You might want to have more control over the query, to perform
multi-phase retrieval such as LTR, or to linearly boost on document
parameters like recency of update or geo-distance. You can do this with
`custom_query_fn`:

```py
def my_custom_query(query_body: dict, query: str) -> dict:
    return {"query": {"match": {"text": {"query": "bar"}}}}

texts = ["foo", "bar", "baz"]
docsearch = ElasticsearchStore.from_texts(
    texts, FakeEmbeddings(), **elasticsearch_connection, index_name=index_name
)
docsearch.similarity_search("foo", k=1, custom_query=my_custom_query)

```

### Exact Example
Developers who have a small dataset in Elasticsearch and don't want the
cost of indexing the dims, trading that off for cost at query time
instead. Uses `script_score`.

```py
texts = ["foo", "bar", "baz"]
docsearch = ElasticsearchStore.from_texts(
    texts,
    FakeEmbeddings(),
    es_url="http://localhost:9200",
    index_name="sample-index",
    strategy=ElasticsearchStore.ExactRetrievalStrategy(),
)
output = docsearch.similarity_search("foo", k=1)
```

### ELSER Example
Elastic provides its own sparse vector model called ELSER. With these
changes, it's really easy to use. The vector store creates a pipeline and
index that's set up for ELSER. All the developer needs to do is configure,
ingest, and query via langchain tooling.

```py
texts = ["foo", "bar", "baz"]
docsearch = ElasticsearchStore.from_texts(
    texts,
    FakeEmbeddings(),
    es_url="http://localhost:9200",
    index_name="sample-index",
    strategy=ElasticsearchStore.SparseVectorStrategy(),
)
output = docsearch.similarity_search("foo", k=1)

```

## Architecture
In future, we can introduce new strategies without breaking backwards
compatibility as we evolve the index / query strategy.

## Credit
On release, could you credit @elastic and @phoey1 please? Thank you!

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-14 23:42:35 -07:00
Bagatur
8cb2594562
Bagatur/dingo (#9079)
Co-authored-by: gary <1625721671@qq.com>
2023-08-11 10:54:45 -07:00
Josh Phillips
5fc07fa524
change id column type to uuid to match function (#7456)
The table creation commands in these examples do not match what the
recently updated functions are looking for.
This change updates the id column type in the table creation command.
Issue number for my report of the doc problem: #7446
@rlancemartin and @eyurtsev I believe this is your area
Twitter: @j1philli

Co-authored-by: Bagatur <baskaryan@gmail.com>
2023-08-10 16:57:19 -07:00
Bidhan Roy
02430e25b6
BagelDB (bageldb.ai), VectorStore integration. (#8971)
- **Description**: [BagelDB](bageldb.ai) is a collaborative vector
database. Integrated the bageldb PyPI package with langchain, with
related tests and code.

  - **Issue**: Not applicable.
  - **Dependencies**: `betabageldb` PyPi package.
  - **Tag maintainer**: @rlancemartin, @eyurtsev, @baskaryan
  - **Twitter handle**: bageldb_ai (https://twitter.com/BagelDB_ai)
  
We ran `make format`, `make lint` and `make test` locally.

Followed the contribution guideline thoroughly
https://github.com/hwchase17/langchain/blob/master/.github/CONTRIBUTING.md

---------

Co-authored-by: Towhid1 <nurulaktertowhid@gmail.com>
2023-08-10 16:48:36 -07:00
Molly Cantillon
99b5a7226c
Weaviate: adding auth example + fixing spelling in ReadME (#8939)
Added basic auth example to Weaviate notebook @baskaryan
2023-08-08 16:24:17 -07:00