docs `providers` update (#18336)

Formatted pages into a consistent form. Added descriptions and links
when needed.

@@ -1,35 +1,38 @@
# Activeloop Deep Lake
This page covers how to use the Deep Lake ecosystem within LangChain.
>[Activeloop Deep Lake](https://docs.activeloop.ai/) is a data lake for Deep Learning applications, allowing you to use it
> as a vector store.
## Why Deep Lake?
- More than just a (multi-modal) vector store. You can later use the dataset to fine-tune your own LLM models.
- Not only stores embeddings, but also the original data with automatic version control.
- Truly serverless. Doesn't require another service and can be used with major cloud providers (`AWS S3`, `GCS`, etc.)

`Activeloop Deep Lake` supports `SelfQuery Retrieval`:
[Activeloop Deep Lake Self Query Retrieval](/docs/integrations/retrievers/self_query/activeloop_deeplake_self_query)
## More Resources
1. [Ultimate Guide to LangChain & Deep Lake: Build ChatGPT to Answer Questions on Your Financial Data](https://www.activeloop.ai/resources/ultimate-guide-to-lang-chain-deep-lake-build-chat-gpt-to-answer-questions-on-your-financial-data/)
2. [Twitter the-algorithm codebase analysis with Deep Lake](https://github.com/langchain-ai/langchain/blob/master/cookbook/twitter-the-algorithm-analysis-deeplake.ipynb)
3. Here are the [whitepaper](https://www.deeplake.ai/whitepaper) and the [academic paper](https://arxiv.org/pdf/2209.10785.pdf) for Deep Lake
4. Additional resources available for review: [Deep Lake](https://github.com/activeloopai/deeplake), [Get started](https://docs.activeloop.ai/getting-started) and [Tutorials](https://docs.activeloop.ai/hub-tutorials)
## Installation and Setup
Install the Python package:
```bash
pip install deeplake
```
## VectorStore
To import this vectorstore:
```python
from langchain_community.vectorstores import DeepLake
```
See a [usage example](/docs/integrations/vectorstores/activeloop_deeplake).
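As a quick orientation, here is a minimal sketch of the standard `VectorStore` flow (the embeddings class and `dataset_path` are illustrative choices, not requirements):

```python
from langchain_community.vectorstores import DeepLake
from langchain_openai import OpenAIEmbeddings

# Build a local Deep Lake dataset from a few texts (the path is a placeholder).
db = DeepLake.from_texts(
    ["Deep Lake stores embeddings together with the original data."],
    embedding=OpenAIEmbeddings(),
    dataset_path="./my_deeplake",
)
docs = db.similarity_search("What does Deep Lake store?")
```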

@ -1,16 +1,42 @@
# AI21 Labs
>[AI21 Labs](https://www.ai21.com/about) is a company specializing in Natural
> Language Processing (NLP), which develops AI systems
> that can understand and generate natural language.
This page covers how to use the `AI21` ecosystem within `LangChain`.
## Installation and Setup
- Get an AI21 API key and set it as an environment variable (`AI21_API_KEY`)
- Install the Python package:
```bash
pip install langchain-ai21
```
## LLMs
See a [usage example](/docs/integrations/llms/ai21).
```python
from langchain_community.llms import AI21
```
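A minimal sketch, assuming `AI21_API_KEY` is set in the environment:

```python
from langchain_community.llms import AI21

llm = AI21()  # reads AI21_API_KEY from the environment
llm.invoke("What is a large language model?")
```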
## Chat models
See a [usage example](/docs/integrations/chat/ai21).
```python
from langchain_ai21 import ChatAI21
```
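A minimal sketch; the model name is an example, so check the AI21 documentation for currently available models:

```python
from langchain_ai21 import ChatAI21
from langchain_core.messages import HumanMessage

chat = ChatAI21(model="j2-ultra")  # example model name
chat.invoke([HumanMessage(content="Hello!")])
```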
## Embedding models
See a [usage example](/docs/integrations/text_embedding/ai21).
```python
from langchain_ai21 import AI21Embeddings
```
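A minimal sketch, again assuming `AI21_API_KEY` is set:

```python
from langchain_ai21 import AI21Embeddings

embeddings = AI21Embeddings()
query_vector = embeddings.embed_query("Hello, world!")
doc_vectors = embeddings.embed_documents(["Document one", "Document two"])
```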

@@ -1,15 +1,31 @@
# AnalyticDB
>[AnalyticDB for PostgreSQL](https://www.alibabacloud.com/help/en/analyticdb-for-postgresql/latest/product-introduction-overview)
> is a massively parallel processing (MPP) data warehousing service
> from [Alibaba Cloud](https://www.alibabacloud.com/)
> that is designed to analyze large volumes of data online.
>`AnalyticDB for PostgreSQL` is developed based on the open-source `Greenplum Database`
> project and is enhanced with in-depth extensions by `Alibaba Cloud`. AnalyticDB
> for PostgreSQL is compatible with the ANSI SQL 2003 syntax and the PostgreSQL and
> Oracle database ecosystems. AnalyticDB for PostgreSQL also supports row store and
> column store. AnalyticDB for PostgreSQL processes petabytes of data offline at a
> high performance level and supports highly concurrent online queries.
This page covers how to use the AnalyticDB ecosystem within LangChain.
## Installation and Setup

You need to install the `sqlalchemy` Python package.
```bash
pip install sqlalchemy
```
## VectorStore
See a [usage example](/docs/integrations/vectorstores/analyticdb).
To import this vectorstore:
```python
from langchain_community.vectorstores import AnalyticDB
```
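A hedged sketch of wiring up the store (the connection string and embeddings are placeholders; a real deployment also needs a PostgreSQL driver such as `psycopg2`):

```python
from langchain_community.embeddings import FakeEmbeddings  # placeholder embeddings
from langchain_community.vectorstores import AnalyticDB

vector_store = AnalyticDB(
    embedding_function=FakeEmbeddings(size=1536),
    connection_string="postgresql+psycopg2://user:password@host:5432/db",  # placeholder
)
vector_store.add_texts(["AnalyticDB is compatible with the PostgreSQL ecosystem."])
```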

@@ -1,8 +1,11 @@
# Annoy
> [Annoy](https://github.com/spotify/annoy) (`Approximate Nearest Neighbors Oh Yeah`)
> is a C++ library with Python bindings to search for points in space that are
> close to a given query point. It also creates large read-only file-based data
> structures that are mapped into memory so that many processes may share the same data.
## Installation and Setup
```bash
pip install annoy
```
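The library is exposed through the `Annoy` vector store in `langchain_community`. A minimal sketch with placeholder embeddings:

```python
from langchain_community.embeddings import FakeEmbeddings  # placeholder embeddings
from langchain_community.vectorstores import Annoy

# Build an in-memory Annoy index from a few texts and query it.
index = Annoy.from_texts(
    ["hello world", "goodbye world"],
    embedding=FakeEmbeddings(size=128),
)
index.similarity_search("hello", k=1)
```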

@@ -3,11 +3,12 @@
>[Apache Doris](https://doris.apache.org/) is a modern data warehouse for real-time analytics.
> It delivers lightning-fast analytics on real-time data at scale.
>Usually `Apache Doris` is categorized into OLAP, and it has shown excellent performance
> in [ClickBench — a Benchmark For Analytical DBMS](https://benchmark.clickhouse.com/).
> Since it has a super-fast vectorized execution engine, it could also be used as a fast vectordb.
## Installation and Setup
```bash
pip install pymysql
```
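Doris can then be used as a vector store through `langchain_community`. A hedged sketch (class and settings names as found in recent `langchain-community` releases; the host, port, and embeddings are placeholders):

```python
from langchain_community.embeddings import FakeEmbeddings  # placeholder embeddings
from langchain_community.vectorstores.apache_doris import (
    ApacheDoris,
    ApacheDorisSettings,
)

settings = ApacheDorisSettings()
settings.host = "127.0.0.1"  # placeholder frontend host
settings.port = 9030  # MySQL-protocol port
vector_store = ApacheDoris(embedding=FakeEmbeddings(size=128), config=settings)
```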

@@ -1,16 +1,13 @@
# Apify
This page covers how to use [Apify](https://apify.com) within LangChain.
>[Apify](https://apify.com) is a cloud platform for web scraping and data extraction,
>which provides an [ecosystem](https://apify.com/store) of more than a thousand
>ready-made apps called *Actors* for various scraping, crawling, and extraction use cases.
[![Apify Actors](/img/ApifyActors.png)](https://apify.com/store)
This integration enables you to run Actors on the `Apify` platform and load their results into LangChain to feed your vector
indexes with documents and data from the web, e.g. to generate answers from websites with documentation,
blogs, or knowledge bases.
@@ -22,9 +19,7 @@ blogs, or knowledge bases.
an environment variable (`APIFY_API_TOKEN`) or pass it to the `ApifyWrapper` as `apify_api_token` in the constructor.
## Utility
You can use the `ApifyWrapper` to run Actors on the Apify platform.
@@ -35,7 +30,7 @@ from langchain_community.utilities import ApifyWrapper
For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/apify).
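For orientation, a minimal sketch of that flow (the Actor ID, start URL, and dataset field names are examples):

```python
from langchain_community.utilities import ApifyWrapper
from langchain_core.documents import Document

apify = ApifyWrapper()  # reads APIFY_API_TOKEN from the environment

# Run an Actor and map each dataset item to a LangChain Document.
loader = apify.call_actor(
    actor_id="apify/website-content-crawler",
    run_input={"startUrls": [{"url": "https://python.langchain.com/"}]},
    dataset_mapping_function=lambda item: Document(
        page_content=item["text"] or "", metadata={"source": item["url"]}
    ),
)
docs = loader.load()
```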
## Document loader
You can also use the `ApifyDatasetLoader` to get data from an Apify dataset.
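A minimal sketch (the dataset ID and field names are placeholders):

```python
from langchain_community.document_loaders import ApifyDatasetLoader
from langchain_core.documents import Document

loader = ApifyDatasetLoader(
    dataset_id="your-dataset-id",  # placeholder
    dataset_mapping_function=lambda item: Document(
        page_content=item["text"] or "", metadata={"source": item["url"]}
    ),
)
docs = loader.load()
```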

@@ -1,17 +1,19 @@
# ArangoDB
>[ArangoDB](https://github.com/arangodb/arangodb) is a scalable graph database system to
> drive value from connected data, faster. Native graphs, an integrated search engine, and JSON support, via a single query language. ArangoDB runs on-prem or in the cloud, anywhere.
## Installation and Setup
Install the [ArangoDB Python Driver](https://github.com/ArangoDB-Community/python-arango) package with
```bash
pip install python-arango
```
## Graph QA Chain
Connect your `ArangoDB` database with a chat model to get insights on your data.
See the notebook example [here](/docs/use_cases/graph/graph_arangodb_qa).
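A hedged sketch of the flow (the connection details are placeholders, and the question assumes a graph already populated with movie data):

```python
from arango import ArangoClient
from langchain.chains import ArangoGraphQAChain
from langchain_community.graphs import ArangoGraph
from langchain_openai import ChatOpenAI

# Connect to a local ArangoDB instance (credentials are placeholders).
db = ArangoClient(hosts="http://localhost:8529").db(
    "_system", username="root", password="", verify=True
)
graph = ArangoGraph(db)
chain = ArangoGraphQAChain.from_llm(
    ChatOpenAI(temperature=0), graph=graph, verbose=True
)
chain.run("Who directed Pulp Fiction?")
```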

@@ -11,45 +11,54 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"[Arthur](https://arthur.ai) is a model monitoring and observability platform.\n",
">[Arthur](https://arthur.ai) is a model monitoring and observability platform.\n",
"\n",
"The following guide shows how to run a registered chat LLM with the Arthur callback handler to automatically log model inferences to Arthur.\n",
"\n",
"If you do not have a model currently onboarded to Arthur, visit our [onboarding guide for generative text models](https://docs.arthur.ai/user-guide/walkthroughs/model-onboarding/generative_text_onboarding.html). For more information about how to use the Arthur SDK, visit our [docs](https://docs.arthur.ai/)."
"If you do not have a model currently onboarded to Arthur, visit our [onboarding guide for generative text models](https://docs.arthur.ai/user-guide/walkthroughs/model-onboarding/generative_text_onboarding.html). For more information about how to use the `Arthur SDK`, visit our [docs](https://docs.arthur.ai/)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Installation and Setup\n",
"\n",
"Place Arthur credentials here"
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"metadata": {
"id": "y8ku6X96sebl"
"id": "Me3prhqjsoqz"
},
"outputs": [],
"source": [
"from langchain.callbacks import ArthurCallbackHandler\n",
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
"from langchain_core.messages import HumanMessage\n",
"from langchain_openai import ChatOpenAI"
"arthur_url = \"https://app.arthur.ai\"\n",
"arthur_login = \"your-arthur-login-username-here\"\n",
"arthur_model_id = \"your-arthur-model-id-here\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Place Arthur credentials here"
"## Callback handler"
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 2,
"metadata": {
"id": "Me3prhqjsoqz"
"id": "y8ku6X96sebl"
},
"outputs": [],
"source": [
"arthur_url = \"https://app.arthur.ai\"\n",
"arthur_login = \"your-arthur-login-username-here\"\n",
"arthur_model_id = \"your-arthur-model-id-here\""
"from langchain.callbacks import ArthurCallbackHandler\n",
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
"from langchain_core.messages import HumanMessage\n",
"from langchain_openai import ChatOpenAI"
]
},
{
@@ -191,9 +200,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.11"
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 1
"nbformat_minor": 4
}
