diff --git a/docs/ecosystem/zilliz.md b/docs/ecosystem/zilliz.md
new file mode 100644
index 00000000..342100cb
--- /dev/null
+++ b/docs/ecosystem/zilliz.md
@@ -0,0 +1,21 @@
+# Zilliz
+
+This page covers how to use the Zilliz Cloud ecosystem within LangChain.
+Zilliz uses the Milvus integration.
+It is broken into two parts: installation and setup, and then references to specific Milvus wrappers.
+
+## Installation and Setup
+- Install the Python SDK with `pip install pymilvus`
+## Wrappers
+
+### VectorStore
+
+There exists a wrapper around Zilliz indexes, allowing you to use it as a vectorstore,
+whether for semantic search or example selection.
+
+To import this vectorstore:
+```python
+from langchain.vectorstores import Milvus
+```
+
+For a more detailed walkthrough of the Milvus wrapper, see [this notebook](../modules/indexes/vectorstores/examples/zilliz.ipynb)
diff --git a/docs/reference/integrations.md b/docs/reference/integrations.md
index 1b87a34e..e487ae49 100644
--- a/docs/reference/integrations.md
+++ b/docs/reference/integrations.md
@@ -55,6 +55,12 @@ The following use cases require specific installs and api keys:
 - _LlamaCpp_:
   - Install requirements with `pip install llama-cpp-python`
   - Download model and convert following [llama.cpp instructions](https://github.com/ggerganov/llama.cpp)
+- _Milvus_:
+  - Install requirements with `pip install pymilvus`
+  - In order to set up a local cluster, take a look [here](https://milvus.io/docs).
+- _Zilliz_:
+  - Install requirements with `pip install pymilvus`
+  - To get up and running, take a look [here](https://zilliz.com/doc/quick_start).
 
 If you are using the `NLTKTextSplitter` or the `SpacyTextSplitter`, you will also need to install the appropriate models. For example, if you want to use the `SpacyTextSplitter`, you will need to install the `en_core_web_sm` model with `python -m spacy download en_core_web_sm`.
 Similarly, if you want to use the `NLTKTextSplitter`, you will need to install the `punkt` model with `python -m nltk.downloader punkt`.
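
For reviewers of this patch, a minimal usage sketch of the `Milvus` wrapper the new page imports. This is a hedged example, not part of the diff: it assumes `pymilvus` is installed, an embeddings backend is available (`OpenAIEmbeddings` is used here and needs `OPENAI_API_KEY`), and a Milvus or Zilliz cluster is reachable; the host, port, texts, and query below are placeholders.

```python
# Sketch only: assumes `pip install pymilvus`, an OpenAI API key, and a
# running Milvus/Zilliz cluster at the placeholder host/port below.
from langchain.vectorstores import Milvus
from langchain.embeddings import OpenAIEmbeddings  # any Embeddings implementation works

embeddings = OpenAIEmbeddings()

# For Zilliz Cloud, replace host/port with the endpoint and credentials
# shown in the Zilliz console.
vector_store = Milvus.from_texts(
    texts=["LangChain integrates with Milvus.", "Zilliz offers managed Milvus."],
    embedding=embeddings,
    connection_args={"host": "127.0.0.1", "port": "19530"},
)

# Semantic search over the indexed texts.
docs = vector_store.similarity_search("Who offers managed Milvus?", k=1)
print(docs[0].page_content)
```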