docs: `providers` update 2 (#18407)

Formatted pages into a consistent form. Added descriptions and links
when needed.
Leonid Ganeline 7 months ago committed by GitHub
parent 239f0a615e
commit fad308a764

@ -1,49 +1,71 @@
# Astra DB
> [DataStax Astra DB](https://docs.datastax.com/en/astra/home/astra.html) is a serverless
> vector-capable database built on `Apache Cassandra®` and made conveniently available
> through an easy-to-use JSON API.
See a [tutorial provided by DataStax](https://docs.datastax.com/en/astra/astra-db-vector/tutorials/chatbot.html).

## Installation and Setup
Install the following Python package:
```bash
pip install "langchain-astradb>=0.0.1"
```
Some old integrations require the `astrapy` package:
```bash
pip install "astrapy>=0.7.1"
```
Get the [connection secrets](https://docs.datastax.com/en/astra/astra-db-vector/get-started/quickstart.html).
Set up the following environment variables:
```python
ASTRA_DB_APPLICATION_TOKEN="TOKEN"
ASTRA_DB_API_ENDPOINT="API_ENDPOINT"
```
## Vector Store
```python
from langchain_astradb import AstraDBVectorStore
vector_store = AstraDBVectorStore(
    embedding=my_embedding,
    collection_name="my_store",
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
)
```
Learn more in the [example notebook](/docs/integrations/vectorstores/astradb).
See the [example provided by DataStax](https://docs.datastax.com/en/astra/astra-db-vector/integrations/langchain.html).
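As a quick orientation, here is a minimal sketch of adding and querying documents with the store defined above (the texts and the query are invented for illustration):

```python
# Add a few documents to the collection, then run a similarity search.
vector_store.add_texts([
    "Astra DB is built on Apache Cassandra.",
    "LangChain integrates with many vector stores.",
])
docs = vector_store.similarity_search("What is Astra DB built on?", k=1)
print(docs[0].page_content)
```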
## Chat message history
```python
from langchain_astradb import AstraDBChatMessageHistory
message_history = AstraDBChatMessageHistory(
    session_id="test-session",
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
)
```
See the [usage example](/docs/integrations/memory/astradb_chat_message_history#example).
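A small sketch of writing to and reading back from the history object above:

```python
# Append a user/AI exchange, then inspect the stored messages.
message_history.add_user_message("Hi there!")
message_history.add_ai_message("Hello! How can I help you today?")
print(message_history.messages)
```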
## LLM Cache
```python
from langchain.globals import set_llm_cache
from langchain_community.cache import AstraDBCache
set_llm_cache(AstraDBCache(
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
))
```
@ -54,11 +76,12 @@ Learn more in the [example notebook](/docs/integrations/llms/llm_caching#astra-d
```python
from langchain.globals import set_llm_cache
from langchain_community.cache import AstraDBSemanticCache
set_llm_cache(AstraDBSemanticCache(
    embedding=my_embedding,
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
))
```
@ -70,10 +93,11 @@ Learn more in the [example notebook](/docs/integrations/memory/astradb_chat_mess
```python
from langchain_community.document_loaders import AstraDBLoader
loader = AstraDBLoader(
    collection_name="my_collection",
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
)
```
@ -88,8 +112,8 @@ from langchain.retrievers.self_query.base import SelfQueryRetriever
vector_store = AstraDBVectorStore(
    embedding=my_embedding,
    collection_name="my_store",
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
)
retriever = SelfQueryRetriever.from_llm(
@ -105,11 +129,12 @@ Learn more in the [example notebook](/docs/integrations/retrievers/self_query/as
## Store
```python
from langchain_community.storage import AstraDBStore
store = AstraDBStore(
    collection_name="my_kv_store",
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
)
```
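`AstraDBStore` follows LangChain's generic key-value store interface (`mset`/`mget`/`mdelete`); a small sketch:

```python
# Write two JSON-serializable values, then read them back by key.
store.mset([("user:1", {"name": "Alice"}), ("user:2", {"name": "Bob"})])
print(store.mget(["user:1", "user:2"]))
```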
@ -118,11 +143,12 @@ Learn more in the [example notebook](/docs/integrations/stores/astradb#astradbst
## Byte Store
```python
from langchain_community.storage import AstraDBByteStore
store = AstraDBByteStore(
    collection_name="my_kv_store",
    api_endpoint=ASTRA_DB_API_ENDPOINT,
    token=ASTRA_DB_APPLICATION_TOKEN,
)
```

@ -9,8 +9,7 @@ pip install awadb
```
## Vector store
```python
from langchain_community.vectorstores import AwaDB
```
@ -19,7 +18,7 @@ from langchain_community.vectorstores import AwaDB
See a [usage example](/docs/integrations/vectorstores/awadb).
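As a hedged sketch (the texts and query are invented, and this assumes AwaDB's built-in embedding model is used when no embedding function is passed):

```python
from langchain_community.vectorstores import AwaDB

# Build a local AwaDB store from a few texts, then query it.
db = AwaDB.from_texts([
    "AwaDB is an AI-native vector database.",
    "LangChain supports many vector stores.",
])
docs = db.similarity_search("What is AwaDB?", k=1)
print(docs[0].page_content)
```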
## Embedding models
```python
from langchain_community.embeddings import AwaEmbeddings
```
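A minimal sketch of embedding a query string with it:

```python
from langchain_community.embeddings import AwaEmbeddings

embeddings = AwaEmbeddings()
vector = embeddings.embed_query("Hello AwaDB")
print(len(vector))
```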

@ -1,16 +1,33 @@
# Baichuan
>[Baichuan Inc.](https://www.baichuan-ai.com/) is a Chinese startup in the era of AGI,
> dedicated to addressing fundamental human needs: Efficiency, Health, and Happiness.
## Installation and Setup

Register and get an API key [here](https://platform.baichuan-ai.com/).
## LLMs
See a [usage example](/docs/integrations/llms/baichuan).
```python
from langchain_community.llms import BaichuanLLM
```
## Chat models
See a [usage example](/docs/integrations/chat/baichuan).
```python
from langchain_community.chat_models import ChatBaichuan
```
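A minimal, hedged sketch (the `baichuan_api_key` parameter name is assumed from the community integration; see the linked example for details):

```python
from langchain_community.chat_models import ChatBaichuan

chat = ChatBaichuan(baichuan_api_key="YOUR_API_KEY")
# Chat models accept a plain string or a list of messages.
print(chat.invoke("Translate 'hello world' into French").content)
```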
## Embedding models
See a [usage example](/docs/integrations/text_embedding/baichuan).
```python
from langchain_community.embeddings import BaichuanTextEmbeddings
```
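And a small sketch for the embeddings class (again, the API key parameter name is assumed):

```python
from langchain_community.embeddings import BaichuanTextEmbeddings

embeddings = BaichuanTextEmbeddings(baichuan_api_key="YOUR_API_KEY")
vector = embeddings.embed_query("今天天气不错")
print(len(vector))
```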

@ -1,18 +1,20 @@
# Banana
>[Banana](https://www.banana.dev/) provided serverless GPU inference for AI models,
> a CI/CD build pipeline and a simple Python framework (`Potassium`) to serve your models.
This page covers how to use the [Banana](https://www.banana.dev) ecosystem within LangChain.
## Installation and Setup
- Install the Python package `banana-dev`:

```bash
pip install banana-dev
```
- Get a Banana API key from the [Banana.dev dashboard](https://app.banana.dev) and set it as an environment variable (`BANANA_API_KEY`)
- Get your model's key and URL slug from the model's details page.
## Define your Banana Template
@ -24,7 +26,7 @@ Other starter repos are available [here](https://github.com/orgs/bananaml/reposi
## Build the Banana app
To use Banana apps within LangChain, you must include the `outputs` key
in the returned JSON, and the value must be a string.
```python
@ -55,18 +57,12 @@ def handler(context: dict, request: Request) -> Response:
This example is from the `app.py` file in [CodeLlama-7B-Instruct-GPTQ](https://github.com/bananaml/demo-codellama-7b-instruct-gptq).
## LLM
Within LangChain, there exists a Banana LLM wrapper, which you can access with
```python
from langchain_community.llms import Banana
```
You need to provide a model key and model url slug, which you can get from the model's details page in the [Banana.dev dashboard](https://app.banana.dev).
```python
llm = Banana(model_key="YOUR_MODEL_KEY", model_url_slug="YOUR_MODEL_URL_SLUG")
```
See a [usage example](/docs/integrations/llms/banana).

@ -1,18 +1,23 @@
# Baseten
>[Baseten](https://baseten.co) is a provider of all the infrastructure you need to deploy and serve
> ML models performantly, scalably, and cost-efficiently.
>As a model inference platform, `Baseten` is a `Provider` in the LangChain ecosystem.
>The `Baseten` integration currently implements a single `Component`, LLMs, but more are planned!

>`Baseten` lets you run both open source models like Llama 2 or Mistral and proprietary or
> fine-tuned models on dedicated GPUs. If you're used to a provider like OpenAI, using Baseten has a few differences:
>* Rather than paying per token, you pay per minute of GPU used.
>* Every model on Baseten uses [Truss](https://truss.baseten.co/welcome), our open-source model packaging framework, for maximum customizability.
>* While we have some [OpenAI ChatCompletions-compatible models](https://docs.baseten.co/api-reference/openai), you can define your own I/O spec with `Truss`.
>[Learn more](https://docs.baseten.co/deploy/lifecycle) about model IDs and deployments.

>Learn more about Baseten in [the Baseten docs](https://docs.baseten.co/).

## Installation and Setup
You'll need two things to use Baseten models with LangChain:
@ -25,47 +30,10 @@ Export your API key to your as an environment variable called `BASETEN_API_KEY`.
```bash
export BASETEN_API_KEY="paste_your_api_key_here"
```
## LLMs

See a [usage example](/docs/integrations/llms/baseten).
Baseten integrates with LangChain through the [LLM component](https://python.langchain.com/docs/integrations/llms/baseten), which provides a standardized and interoperable interface for models that are deployed on your Baseten workspace.
You can deploy foundation models like Mistral and Llama 2 with one click from the [Baseten model library](https://app.baseten.co/explore/) or if you have your own model, [deploy it with Truss](https://truss.baseten.co/welcome).
In this example, we'll work with Mistral 7B. [Deploy Mistral 7B here](https://app.baseten.co/explore/mistral_7b_instruct) and follow along with the deployed model's ID, found in the model dashboard.
To use this module, you must:
* Export your Baseten API key as the environment variable BASETEN_API_KEY
* Get the model ID for your model from your Baseten dashboard
* Identify the model deployment ("production" for all model library models)
Production deployment (standard for model library models)
```python
from langchain_community.llms import Baseten
mistral = Baseten(model="MODEL_ID", deployment="production")
mistral("What is the Mistral wind?")
```
Development deployment
```python
from langchain_community.llms import Baseten
mistral = Baseten(model="MODEL_ID", deployment="development")
mistral("What is the Mistral wind?")
```
Other published deployment
```python
from langchain_community.llms import Baseten
mistral = Baseten(model="MODEL_ID", deployment="DEPLOYMENT_ID")
mistral("What is the Mistral wind?")
```
Streaming LLM output, chat completions, embeddings models, and more are all supported on the Baseten platform and coming soon to our LangChain integration. Contact us at [support@baseten.co](mailto:support@baseten.co) with any questions about using Baseten with LangChain.

@ -1,7 +1,8 @@
# Beam
>[Beam](https://www.beam.cloud/) is a cloud computing platform that allows you to run your code
> on remote servers with GPUs.
## Installation and Setup
@ -9,84 +10,19 @@ It is broken into two parts: installation and setup, and then references to spec
- Install the Beam CLI with `curl https://raw.githubusercontent.com/slai-labs/get-beam/main/get-beam.sh -sSfL | sh`
- Register API keys with `beam configure`
- Set environment variables (`BEAM_CLIENT_ID`) and (`BEAM_CLIENT_SECRET`)
- Install the Beam SDK:

```bash
pip install beam-sdk
```

## LLMs

See a [usage example](/docs/integrations/llms/beam).

See another example in the [Beam documentation](https://docs.beam.cloud/examples/langchain).
```python
from langchain_community.llms.beam import Beam
```
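For reference, the configuration, deployment, and call pattern previously documented on this page looked like the sketch below; treat it as illustrative rather than a copy-paste recipe:

```python
from langchain_community.llms.beam import Beam

# Define a Beam app around a GPT-2 model (values taken from the earlier example on this page).
llm = Beam(
    model_name="gpt2",
    name="langchain-gpt2-test",
    cpu=8,
    memory="32Gi",
    gpu="A10G",
    python_version="python3.8",
    python_packages=[
        "diffusers[torch]>=0.10",
        "transformers",
        "torch",
        "pillow",
        "accelerate",
        "safetensors",
        "xformers",
    ],
    max_length="50",
    verbose=False,
)

llm._deploy()  # deploy the app to Beam
response = llm._call("Running machine learning on a remote GPU")
print(response)
```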

@ -1,37 +1,20 @@
# Bittensor

>The [Neural Internet Bittensor](https://neuralinternet.ai/) network is an open source protocol
> that powers a decentralized, blockchain-based, machine learning network.
## Installation and Setup
Get your API key from [Neural Internet](https://api.neuralinternet.ai).

You can [analyze your API keys](https://api.neuralinternet.ai/api-keys)
and [logs of your usage](https://api.neuralinternet.ai/logs).
## LLMs
See a [usage example](/docs/integrations/llms/bittensor).

```python
from langchain_community.llms import NIBittensorLLM
```
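For reference, a sketch based on the examples previously documented on this page:

```python
import json

from langchain_community.llms import NIBittensorLLM

# Single response with a custom system prompt.
llm = NIBittensorLLM(
    system_prompt="Your task is to provide concise and accurate response based on user prompt"
)
print(llm.invoke("Write a fibonacci function in python with golden ratio"))

# Multiple responses from top miners via the `top_responses` parameter.
multi_response_llm = NIBittensorLLM(top_responses=10)
json_multi_resp = json.loads(multi_response_llm.invoke("What is Neural Network Feeding Mechanism?"))
print(json_multi_resp)
```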

@ -1,24 +1,16 @@
# Breebs (Open Knowledge)
>[Breebs](https://www.breebs.com/) is an open collaborative knowledge platform.
>Anybody can create a `Breeb`, a knowledge capsule based on PDFs stored on a Google Drive folder.
>A `Breeb` can be used by any LLM/chatbot to improve its expertise, reduce hallucinations and give access to sources.
>Behind the scenes, `Breebs` implements several `Retrieval Augmented Generation (RAG)` models
> to seamlessly provide useful context at each iteration.
## List of available Breebs
To get the full list of Breebs, including their key (`breeb_key`) and description:
https://breebs.promptbreeders.com/web/listbreebs.
Dozens of Breebs have already been created by the community and are freely available for use. They cover a wide range of expertise, from organic chemistry to mythology, as well as tips on seduction and decentralized finance.
## Creating a new Breeb
To generate a new Breeb, simply compile PDF files in a publicly shared Google Drive folder and initiate the creation process on the [BREEBS website](https://www.breebs.com/) by clicking the "Create Breeb" button. You can currently include up to 120 files, with a total character limit of 15 million.
## Retriever
```python
from langchain.retrievers import BreebsRetriever
```
[See a usage example (Retrieval & ConversationalRetrievalChain)](/docs/integrations/retrievers/breebs)
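A minimal, hedged sketch of using the retriever; `breeb_key` identifies the knowledge capsule, and "Parivoyage" is only an example key from the public list:

```python
from langchain.retrievers import BreebsRetriever

retriever = BreebsRetriever(breeb_key="Parivoyage")
docs = retriever.invoke("What are some unusual things to do in Paris?")
print(docs[0].page_content)
```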

@ -7,7 +7,7 @@ The integrations outlined in this page can be used with `Cassandra` as well as o
i.e. those using the `Cassandra Query Language` protocol.
## Installation and Setup
Install the following Python package:
@ -15,15 +15,10 @@ Install the following Python package:
pip install "cassio>=0.1.4"
```
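Most of the integrations below rely on `cassio` having been initialized with your database connection first; a hedged sketch (parameter names follow the `cassio` documentation, use the form matching your setup):

```python
import cassio

# For DataStax Astra DB (token and database ID from the Astra dashboard):
cassio.init(token="AstraCS:...", database_id="01234567-89ab-...")

# ...or for a Cassandra cluster, pass an existing driver Session:
# cassio.init(session=my_session, keyspace="my_keyspace")
```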
## Vector Store
```python
from langchain_community.vectorstores import Cassandra
vector_store = Cassandra(
    embedding=my_embedding,
    table_name="my_store",
)
```
Learn more in the [example notebook](/docs/integrations/vectorstores/cassandra).
@ -32,7 +27,6 @@ Learn more in the [example notebook](/docs/integrations/vectorstores/cassandra).
```python
from langchain_community.chat_message_histories import CassandraChatMessageHistory
message_history = CassandraChatMessageHistory(session_id="my-session")
```
Learn more in the [example notebook](/docs/integrations/memory/cassandra_chat_message_history).
@ -66,12 +60,11 @@ Learn more in the [example notebook](/docs/integrations/llms/llm_caching#cassand
```python
from langchain_community.document_loaders import CassandraLoader
loader = CassandraLoader(table="my_table")
docs = loader.load()
```
Learn more in the [example notebook](/docs/integrations/document_loaders/cassandra).
#### Attribution statement
> Apache Cassandra, Cassandra and Apache are either registered trademarks or trademarks of
> the [Apache Software Foundation](http://www.apache.org/) in the United States and/or other countries.

@ -1,17 +1,26 @@
# CerebriumAI
>[Cerebrium](https://docs.cerebrium.ai/cerebrium/getting-started/introduction) is a serverless GPU infrastructure provider.
> It provides API access to several LLMs.
See the examples in the [CerebriumAI documentation](https://docs.cerebrium.ai/examples/langchain).
## Installation and Setup
- Install the Python package:

```bash
pip install cerebrium
```

- [Get a CerebriumAI API key](https://docs.cerebrium.ai/cerebrium/getting-started/installation) and set
it as an environment variable (`CEREBRIUMAI_API_KEY`)
## LLMs
See a [usage example](/docs/integrations/llms/cerebriumai).
There exists a CerebriumAI LLM wrapper, which you can access with:
```python
from langchain_community.llms import CerebriumAI
```
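A minimal, hedged sketch of instantiating the wrapper against a deployed model (the endpoint URL is a placeholder):

```python
from langchain_community.llms import CerebriumAI

# Assumes CEREBRIUMAI_API_KEY is set and a model is deployed on Cerebrium.
llm = CerebriumAI(endpoint_url="https://run.cerebrium.ai/YOUR-ENDPOINT/predict")
print(llm.invoke("Tell me a joke about street lamps"))
```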