docs: providers update 2 (#18407)

Formatted pages into a consistent form. Added descriptions and links
when needed.
Leonid Ganeline 2024-03-11 18:35:37 -07:00 committed by GitHub
parent 239f0a615e
commit fad308a764
10 changed files with 151 additions and 232 deletions

View File

@@ -1,49 +1,71 @@

# Astra DB

> [DataStax Astra DB](https://docs.datastax.com/en/astra/home/astra.html) is a serverless
> vector-capable database built on `Apache Cassandra®` and made conveniently available
> through an easy-to-use JSON API.

See a [tutorial provided by DataStax](https://docs.datastax.com/en/astra/astra-db-vector/tutorials/chatbot.html).

## Installation and Setup

Install the following Python package:

```bash
pip install "langchain-astradb>=0.0.1"
```

Some old integrations require the `astrapy` package:

```bash
pip install "astrapy>=0.7.1"
```

Get the [connection secrets](https://docs.datastax.com/en/astra/astra-db-vector/get-started/quickstart.html).
Set up the following environment variables:

```python
ASTRA_DB_APPLICATION_TOKEN="TOKEN"
ASTRA_DB_API_ENDPOINT="API_ENDPOINT"
```
## Vector Store

```python
from langchain_astradb import AstraDBVectorStore

vector_store = AstraDBVectorStore(
    embedding=my_embedding,
    collection_name="my_store",
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
)
```

Learn more in the [example notebook](/docs/integrations/vectorstores/astradb).

See the [example provided by DataStax](https://docs.datastax.com/en/astra/astra-db-vector/integrations/langchain.html).
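A minimal usage sketch (assuming `my_embedding` is any LangChain embeddings object and the environment variables above are exported):

```python
# Add a few texts, then run a similarity search over them.
vector_store.add_texts(["Astra DB is built on Apache Cassandra."])
docs = vector_store.similarity_search("What is Astra DB built on?", k=1)
print(docs[0].page_content)
```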
## Chat message history

```python
from langchain_astradb import AstraDBChatMessageHistory

message_history = AstraDBChatMessageHistory(
    session_id="test-session",
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
)
```

See the [usage example](/docs/integrations/memory/astradb_chat_message_history#example).
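The object follows LangChain's standard chat-message-history interface, so a sketch like this should apply:

```python
# Append messages to the session and read them back.
message_history.add_user_message("Hello!")
message_history.add_ai_message("Hi! How can I help?")
print(message_history.messages)
```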
## LLM Cache

```python
from langchain.globals import set_llm_cache
from langchain_community.cache import AstraDBCache

set_llm_cache(AstraDBCache(
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
))
```
@@ -54,11 +76,12 @@ Learn more in the [example notebook](/docs/integrations/llms/llm_caching#astra-d

```python
from langchain.globals import set_llm_cache
from langchain_community.cache import AstraDBSemanticCache

set_llm_cache(AstraDBSemanticCache(
    embedding=my_embedding,
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
))
```
@@ -70,10 +93,11 @@ Learn more in the [example notebook](/docs/integrations/memory/astradb_chat_mess

```python
from langchain_community.document_loaders import AstraDBLoader

loader = AstraDBLoader(
    collection_name="my_collection",
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
)
```
@@ -88,8 +112,8 @@ from langchain.retrievers.self_query.base import SelfQueryRetriever

vector_store = AstraDBVectorStore(
    embedding=my_embedding,
    collection_name="my_store",
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
)

retriever = SelfQueryRetriever.from_llm(
@@ -105,11 +129,12 @@ Learn more in the [example notebook](/docs/integrations/retrievers/self_query/as

## Store

```python
from langchain_community.storage import AstraDBStore

store = AstraDBStore(
    collection_name="my_kv_store",
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
)
```
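`AstraDBStore` implements LangChain's standard key-value store interface, so a usage sketch looks like:

```python
# Write and read key-value pairs through the BaseStore interface.
store.mset([("key1", "value1"), ("key2", "value2")])
print(store.mget(["key1", "key2"]))
```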
@@ -118,11 +143,12 @@ Learn more in the [example notebook](/docs/integrations/stores/astradb#astradbst

## Byte Store

```python
from langchain_community.storage import AstraDBByteStore

store = AstraDBByteStore(
    collection_name="my_kv_store",
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
)
```

View File

@@ -9,8 +9,7 @@ pip install awadb

```

## Vector store

```python
from langchain_community.vectorstores import AwaDB
```

@@ -19,7 +18,7 @@ from langchain_community.vectorstores import AwaDB

See a [usage example](/docs/integrations/vectorstores/awadb).

## Embedding models

```python
from langchain_community.embeddings import AwaEmbeddings
```

View File

@@ -1,16 +1,33 @@

# Baichuan

>[Baichuan Inc.](https://www.baichuan-ai.com/) is a Chinese startup in the era of AGI,
> dedicated to addressing fundamental human needs: Efficiency, Health, and Happiness.

## Installation and Setup

Register and get an API key [here](https://platform.baichuan-ai.com/).

## LLMs

See a [usage example](/docs/integrations/llms/baichuan).

```python
from langchain_community.llms import BaichuanLLM
```

## Chat models

See a [usage example](/docs/integrations/chat/baichuan).

```python
from langchain_community.chat_models import ChatBaichuan
```
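A minimal chat sketch (assuming the API key is picked up from a `BAICHUAN_API_KEY` environment variable; otherwise pass it explicitly):

```python
from langchain_community.chat_models import ChatBaichuan

chat = ChatBaichuan()  # assumes BAICHUAN_API_KEY is set in the environment
print(chat.invoke("What is the capital of France?").content)
```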
## Embedding models

See a [usage example](/docs/integrations/text_embedding/baichuan).

```python
from langchain_community.embeddings import BaichuanTextEmbeddings
```
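A short embeddings sketch using the standard `embed_query` interface (same API-key assumption as above):

```python
from langchain_community.embeddings import BaichuanTextEmbeddings

embeddings = BaichuanTextEmbeddings()  # assumes BAICHUAN_API_KEY is set
vector = embeddings.embed_query("Hello, Baichuan!")
print(len(vector))
```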

View File

@@ -1,18 +1,20 @@

# Banana

>[Banana](https://www.banana.dev/) provided serverless GPU inference for AI models,
> a CI/CD build pipeline, and a simple Python framework (`Potassium`) to serve your models.

This page covers how to use the [Banana](https://www.banana.dev) ecosystem within LangChain.

## Installation and Setup

- Install the python package `banana-dev`:

```bash
pip install banana-dev
```

- Get a Banana API key from the [Banana.dev dashboard](https://app.banana.dev) and set it as an environment variable (`BANANA_API_KEY`)
- Get your model's key and URL slug from the model's details page.
## Define your Banana Template

@@ -24,7 +26,7 @@ Other starter repos are available [here](https://github.com/orgs/bananaml/reposi

## Build the Banana app

To use Banana apps within LangChain, you must include the `outputs` key
in the returned json, and the value must be a string.
```python
@@ -55,18 +57,12 @@ def handler(context: dict, request: Request) -> Response:
```

This example is from the `app.py` file in [CodeLlama-7B-Instruct-GPTQ](https://github.com/bananaml/demo-codellama-7b-instruct-gptq).
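The handler body is elided by this hunk; a minimal `Potassium` app satisfying that contract might look like the following sketch (the app name and echo logic are illustrative placeholders):

```python
from potassium import Potassium, Request, Response

app = Potassium("my_app")

@app.init
def init():
    # Load your model here; whatever is returned becomes the handler context.
    return {}

@app.handler()
def handler(context: dict, request: Request) -> Response:
    prompt = request.json.get("prompt")
    result = f"echo: {prompt}"  # placeholder for the real model call
    # LangChain requires the returned json to include "outputs" with a string value.
    return Response(json={"outputs": result}, status=200)

if __name__ == "__main__":
    app.serve()
```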
## LLM

```python
from langchain_community.llms import Banana
```

See a [usage example](/docs/integrations/llms/banana).
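A construction sketch using the two identifiers from the model's details page (placeholder values):

```python
from langchain_community.llms import Banana

# Both values come from the model's details page in the Banana.dev dashboard.
llm = Banana(model_key="YOUR_MODEL_KEY", model_url_slug="YOUR_MODEL_URL_SLUG")
```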

View File

@@ -1,18 +1,23 @@

# Baseten

>[Baseten](https://baseten.co) is a provider of all the infrastructure you need to deploy and serve
> ML models performantly, scalably, and cost-efficiently.

>As a model inference platform, `Baseten` is a `Provider` in the LangChain ecosystem.
>The `Baseten` integration currently implements a single `Component`, LLMs, but more are planned!

>`Baseten` lets you run both open source models like Llama 2 or Mistral and run proprietary or
>fine-tuned models on dedicated GPUs. If you're used to a provider like OpenAI, using Baseten has a few differences:

>* Rather than paying per token, you pay per minute of GPU used.
>* Every model on Baseten uses [Truss](https://truss.baseten.co/welcome), our open-source model packaging framework, for maximum customizability.
>* While we have some [OpenAI ChatCompletions-compatible models](https://docs.baseten.co/api-reference/openai), you can define your own I/O spec with `Truss`.

>[Learn more](https://docs.baseten.co/deploy/lifecycle) about model IDs and deployments.

>Learn more about Baseten in [the Baseten docs](https://docs.baseten.co/).

## Installation and Setup

You'll need two things to use Baseten models with LangChain:
@@ -25,47 +30,10 @@ Export your API key as an environment variable called `BASETEN_API_KEY`.

```bash
export BASETEN_API_KEY="paste_your_api_key_here"
```
## LLMs

See a [usage example](/docs/integrations/llms/baseten).

```python
from langchain_community.llms import Baseten
```
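A construction sketch based on the deployment walkthrough this page previously carried (`MODEL_ID` comes from your Baseten dashboard; `deployment` can be `"production"`, `"development"`, or a published deployment ID):

```python
from langchain_community.llms import Baseten

# "production" is the standard deployment for model-library models.
mistral = Baseten(model="MODEL_ID", deployment="production")
print(mistral.invoke("What is the Mistral wind?"))
```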

View File

@@ -1,7 +1,8 @@

# Beam

>[Beam](https://www.beam.cloud/) is a cloud computing platform that allows you to run your code
> on remote servers with GPUs.

## Installation and Setup
@@ -9,84 +10,19 @@ It is broken into two parts: installation and setup, and then references to spec

- Install the Beam CLI with `curl https://raw.githubusercontent.com/slai-labs/get-beam/main/get-beam.sh -sSfL | sh`
- Register API keys with `beam configure`
- Set environment variables (`BEAM_CLIENT_ID`) and (`BEAM_CLIENT_SECRET`)
- Install the Beam SDK:

```bash
pip install beam-sdk
```
## LLMs

See a [usage example](/docs/integrations/llms/beam).

See another example in the [Beam documentation](https://docs.beam.cloud/examples/langchain).

```python
from langchain_community.llms.beam import Beam
```
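A condensed sketch of the deploy-and-call walkthrough this page previously carried (the wrapper exposes underscore-prefixed `_deploy` and `_call` methods; the package list is trimmed):

```python
from langchain_community.llms.beam import Beam

# Define the app: environment, resources, and maximum response length.
llm = Beam(
    model_name="gpt2",
    name="langchain-gpt2-test",
    cpu=8,
    memory="32Gi",
    gpu="A10G",
    python_version="python3.8",
    python_packages=["transformers", "torch", "accelerate"],
    max_length="50",
    verbose=False,
)

llm._deploy()  # deploy the app to Beam
response = llm._call("Running machine learning on a remote GPU")  # returns the GPT-2 completion
print(response)
```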

View File

@@ -1,37 +1,20 @@

# Bittensor

>[Neural Internet Bittensor](https://neuralinternet.ai/) is an open source protocol
> that powers a decentralized, blockchain-based machine learning network.

## Installation and Setup

Get your API_KEY from [Neural Internet](https://api.neuralinternet.ai).

You can [analyze API_KEYS](https://api.neuralinternet.ai/api-keys)
and [logs of your usage](https://api.neuralinternet.ai/logs).

## LLMs

See a [usage example](/docs/integrations/llms/bittensor).

```python
from langchain_community.llms import NIBittensorLLM
```
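Two usage sketches this page previously carried, lightly tidied:

```python
import json

from langchain_community.llms import NIBittensorLLM

# Unified single-response interface across models.
llm = NIBittensorLLM(
    system_prompt="Your task is to provide concise and accurate response based on user prompt"
)
print(llm.invoke("Write a fibonacci function in python with golden ratio"))

# Multiple responses from top miners via the top_responses parameter (returned as JSON).
multi_response_llm = NIBittensorLLM(top_responses=10)
multi_resp = multi_response_llm.invoke("What is Neural Network Feeding Mechanism?")
json_multi_resp = json.loads(multi_resp)
print(json_multi_resp)
```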

View File

@@ -1,24 +1,16 @@

# Breebs (Open Knowledge)

>[Breebs](https://www.breebs.com/) is an open collaborative knowledge platform.
>Anybody can create a `Breeb`, a knowledge capsule based on PDFs stored on a Google Drive folder.
>A `Breeb` can be used by any LLM/chatbot to improve its expertise, reduce hallucinations and give access to sources.
>Behind the scenes, `Breebs` implements several `Retrieval Augmented Generation (RAG)` models
> to seamlessly provide useful context at each iteration.

## Retriever

```python
from langchain.retrievers import BreebsRetriever
```

[See a usage example (Retrieval & ConversationalRetrievalChain)](/docs/integrations/retrievers/breebs)
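A retrieval sketch (the key is a placeholder; the full list of `Breebs` and their `breeb_key` values is at https://breebs.promptbreeders.com/web/listbreebs):

```python
from langchain.retrievers import BreebsRetriever

# Each Breeb is identified by its breeb_key.
retriever = BreebsRetriever(breeb_key="YOUR_BREEB_KEY")
docs = retriever.invoke("What does this Breeb cover?")
print(docs)
```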

View File

@@ -7,7 +7,7 @@ The integrations outlined in this page can be used with `Cassandra` as well as o

i.e. those using the `Cassandra Query Language` protocol.

## Installation and Setup

Install the following Python package:

@@ -15,15 +15,10 @@ Install the following Python package:

```bash
pip install "cassio>=0.1.4"
```
## Vector Store

```python
from langchain_community.vectorstores import Cassandra
```

Learn more in the [example notebook](/docs/integrations/vectorstores/cassandra).
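A construction sketch this page previously carried (assumes `cassio.init(...)` has been called with your session and keyspace, and `my_embedding` is an embeddings object):

```python
from langchain_community.vectorstores import Cassandra

vector_store = Cassandra(
    embedding=my_embedding,
    table_name="my_store",
)
```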
@@ -32,7 +27,6 @@ Learn more in the [example notebook](/docs/integrations/vectorstores/cassandra).

```python
from langchain_community.chat_message_histories import CassandraChatMessageHistory
```

Learn more in the [example notebook](/docs/integrations/memory/cassandra_chat_message_history).
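The construction previously shown here:

```python
from langchain_community.chat_message_histories import CassandraChatMessageHistory

message_history = CassandraChatMessageHistory(session_id="my-session")
```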
@@ -66,12 +60,11 @@ Learn more in the [example notebook](/docs/integrations/llms/llm_caching#cassand

```python
from langchain_community.document_loaders import CassandraLoader
```

Learn more in the [example notebook](/docs/integrations/document_loaders/cassandra).
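The loading sketch previously shown here (again assuming an initialized `cassio` session):

```python
from langchain_community.document_loaders import CassandraLoader

loader = CassandraLoader(table="my_table")
docs = loader.load()
```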
#### Attribution statement

> Apache Cassandra, Cassandra and Apache are either registered trademarks or trademarks of
> the [Apache Software Foundation](http://www.apache.org/) in the United States and/or other countries.

View File

@@ -1,17 +1,26 @@

# CerebriumAI

>[Cerebrium](https://docs.cerebrium.ai/cerebrium/getting-started/introduction) is a serverless GPU infrastructure provider.
> It provides API access to several LLM models.

See the examples in the [CerebriumAI documentation](https://docs.cerebrium.ai/examples/langchain).

## Installation and Setup

- Install a python package:

```bash
pip install cerebrium
```

- [Get a CerebriumAI API key](https://docs.cerebrium.ai/cerebrium/getting-started/installation) and set
it as an environment variable (`CEREBRIUMAI_API_KEY`)

## LLMs

See a [usage example](/docs/integrations/llms/cerebriumai).

```python
from langchain_community.llms import CerebriumAI
```
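A construction sketch; the `endpoint_url` parameter is the community wrapper's way of pointing at your deployed endpoint (the value is a placeholder):

```python
from langchain_community.llms import CerebriumAI

# The endpoint URL comes from your Cerebrium deployment.
llm = CerebriumAI(endpoint_url="YOUR_ENDPOINT_URL")
print(llm.invoke("Tell me a joke."))
```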