docs integrations/providers update 10 (#19970)

Fixed broken links. Reformatted for consistency. Added missing
imports in the example code.
Leonid Ganeline 2024-04-04 14:22:45 -07:00 committed by GitHub
parent 82f0198be2
commit 4c969286fe
7 changed files with 137 additions and 106 deletions

View File

@@ -10,7 +10,9 @@
> Alibaba's own e-commerce ecosystem.

## Chat Models

### Alibaba Cloud PAI EAS

See [installation instructions and a usage example](/docs/integrations/chat/alibaba_cloud_pai_eas).

@@ -18,7 +20,9 @@ See [installation instructions and a usage example](/docs/integrations/chat/alibaba_cloud_pai_eas)
```python
from langchain_community.chat_models import PaiEasChatEndpoint
```
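A minimal usage sketch, assuming the service URL and token are exposed via environment variables (both values are placeholders for your own PAI-EAS deployment; see the usage example above for details):

```python
import os

from langchain_community.chat_models import PaiEasChatEndpoint
from langchain_core.messages import HumanMessage

# EAS_SERVICE_URL and EAS_SERVICE_TOKEN are placeholders for your deployment.
chat = PaiEasChatEndpoint(
    eas_service_url=os.environ["EAS_SERVICE_URL"],
    eas_service_token=os.environ["EAS_SERVICE_TOKEN"],
)

response = chat.invoke([HumanMessage(content="Say hello!")])
print(response.content)
```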
## Vector stores

### Alibaba Cloud OpenSearch

See [installation instructions and a usage example](/docs/integrations/vectorstores/alibabacloud_opensearch).

@@ -26,7 +30,17 @@
```python
from langchain_community.vectorstores import AlibabaCloudOpenSearch, AlibabaCloudOpenSearchSettings
```

### Alibaba Cloud Tair

See [installation instructions and a usage example](/docs/integrations/vectorstores/tair).

```python
from langchain_community.vectorstores import Tair
```

## Document Loaders

### Alibaba Cloud MaxCompute

See [installation instructions and a usage example](/docs/integrations/document_loaders/alibaba_cloud_maxcompute).

View File

@@ -1,22 +1,23 @@
# Tair

>[Alibaba Cloud Tair](https://www.alibabacloud.com/help/en/tair/latest/what-is-tair) is a cloud-native in-memory database service
> developed by `Alibaba Cloud`. It provides rich data models and enterprise-grade capabilities to
> support your real-time online scenarios while maintaining full compatibility with open-source `Redis`.
> `Tair` also introduces persistent memory-optimized instances that are based on
> the new non-volatile memory (NVM) storage medium.

## Installation and Setup

Install the Tair Python SDK:

```bash
pip install tair
```

## Vector Store

```python
from langchain_community.vectorstores import Tair
```

See a [usage example](/docs/integrations/vectorstores/tair).
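A minimal sketch of building and querying a `Tair` vector store; the Tair URL and the use of fake embeddings are illustrative assumptions (see the usage example above for a real walkthrough):

```python
from langchain_community.embeddings.fake import FakeEmbeddings
from langchain_community.vectorstores import Tair

# Index a few texts; tair_url is a placeholder for your instance
# (it can also be provided via the TAIR_URL environment variable).
vector_store = Tair.from_texts(
    texts=[
        "Tair is compatible with open-source Redis",
        "Tair supports vector search",
    ],
    embedding=FakeEmbeddings(size=128),
    tair_url="redis://localhost:6379",
)

docs = vector_store.similarity_search("Which service supports vector search?", k=1)
print(docs[0].page_content)
```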

View File

@@ -1,81 +1,38 @@
# TiDB

> [TiDB Cloud](https://tidbcloud.com/) is a comprehensive Database-as-a-Service (DBaaS) solution
> that provides dedicated and serverless options. `TiDB Serverless` is now integrating
> a built-in vector search into the MySQL landscape. With this enhancement, you can seamlessly
> develop AI applications using `TiDB Serverless` without the need for a new database or additional
> technical stacks. Be among the first to experience it by joining the [waitlist for the private beta](https://tidb.cloud/ai).

## Installation and Setup

You need the connection details for your TiDB database.
Visit [TiDB Cloud](https://tidbcloud.com/) to get them.

## Document loader

```python
from langchain_community.document_loaders import TiDBLoader
```

Please refer to the details [here](/docs/integrations/document_loaders/tidb).
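A minimal sketch of loading rows as documents, adapted from the example that used to live on this page (the connection string, table, and column names are placeholders):

```python
from langchain_community.document_loaders import TiDBLoader

# Set up TiDBLoader to retrieve data; the connection string,
# table, and column names below are placeholders.
loader = TiDBLoader(
    connection_string="mysql+pymysql://user:password@host:4000/test",
    query="SELECT * FROM items;",
    page_content_columns=["name", "description"],
    metadata_columns=["id"],
)

# Load data
documents = loader.load()
```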
## Vector store

```python
from langchain_community.vectorstores import TiDBVectorStore
```

Please refer to the details [here](/docs/integrations/vectorstores/tidb_vector).
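A minimal sketch, adapted from the example that used to live on this page (the connection string is a placeholder, and `OpenAIEmbeddings` requires `OPENAI_API_KEY` to be set):

```python
from langchain_community.vectorstores import TiDBVectorStore
from langchain_openai import OpenAIEmbeddings

db = TiDBVectorStore.from_texts(
    embedding=OpenAIEmbeddings(),
    texts=[
        "Andrew likes eating oranges",
        "Alexandra is from England",
        "Ketanji Brown Jackson is a judge",
    ],
    table_name="tidb_vector_langchain",
    connection_string="mysql+pymysql://user:password@host:4000/test",  # placeholder
    distance_strategy="cosine",
)

docs_with_score = db.similarity_search_with_score("Can you tell me about Alexandra?")
for doc, score in docs_with_score:
    print("Score:", score)
    print(doc.page_content)
```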
## Memory

```python
from langchain_community.chat_message_histories import TiDBChatMessageHistory
```

Please refer to the details [here](/docs/integrations/memory/tidb_chat_message_history).
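A minimal sketch, adapted from the example that used to live on this page (the connection string and session id are placeholders):

```python
from langchain_community.chat_message_histories import TiDBChatMessageHistory

history = TiDBChatMessageHistory(
    connection_string="mysql+pymysql://user:password@host:4000/test",  # placeholder
    session_id="code_gen",
)

history.add_user_message("How's our feature going?")
history.add_ai_message("It's going well. We are working on testing now.")
```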

View File

@@ -1,32 +1,37 @@
# TigerGraph

What is `TigerGraph`?

**TigerGraph in a nutshell:**

- `TigerGraph` is a natively distributed and high-performance graph database.
- The storage of data in a graph format of vertices and edges leads to rich relationships, ideal for grounding LLM responses.
- Get started quickly with `TigerGraph` by visiting [their website](https://tigergraph.com/).

## Installation and Setup

Install the Python SDK:

```bash
pip install pyTigerGraph
```

## Graph store

### TigerGraph Store

To utilize the `TigerGraph InquiryAI` functionality, you can import `TigerGraph` from `langchain_community.graphs`.

```python
import pyTigerGraph as tg

conn = tg.TigerGraphConnection(
    host="DATABASE_HOST_HERE",
    graphname="GRAPH_NAME_HERE",
    username="USERNAME_HERE",
    password="PASSWORD_HERE",
)

### ==== CONFIGURE INQUIRYAI HOST ====
conn.ai.configureInquiryAIHost("INQUIRYAI_HOST_HERE")

from langchain_community.graphs import TigerGraph

graph = TigerGraph(conn)
result = graph.query("How many servers are there?")
print(result)
```

View File

@@ -6,13 +6,19 @@
   "source": [
    "# Together AI\n",
    "\n",
    "> [Together AI](https://together.ai) is a cloud platform for building and running generative AI.\n",
    "> \n",
    "> It makes it easy to fine-tune or run leading open-source models with a couple of lines of code.\n",
    "> We have integrated the world's leading open-source models, including `Llama-2`, `RedPajama`, `Falcon`, `Alpaca`, `Stable Diffusion XL`, and more.\n",
    "\n",
    "## Installation and Setup\n",
    "\n",
    "To use, you'll need an API key which you can find [here](https://api.together.xyz/settings/api-keys).\n",
    "\n",
    "The API key can be passed in as the init param\n",
    "``together_api_key`` or set as the environment variable ``TOGETHER_API_KEY``.\n",
    "\n",
    "See details in the [Together API reference](https://docs.together.ai/reference).\n",
    "\n",
    "You will also need to install the `langchain-together` integration package:"
   ]
@@ -26,6 +32,15 @@
    "%pip install --upgrade --quiet langchain-together"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## LLMs\n",
    "\n",
    "See a [usage example](/docs/integrations/llms/together)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
@@ -34,20 +49,33 @@
   },
   "outputs": [],
   "source": [
    "from langchain_together import Together"
   ]
  },
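  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal usage sketch added for illustration: the model name below is an assumption (any model from Together's catalog should work), and the API key is read from ``TOGETHER_API_KEY`` when not passed explicitly."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# The model name is a placeholder from Together's catalog; the API key\n",
    "# is read from the TOGETHER_API_KEY environment variable if not given.\n",
    "llm = Together(model=\"togethercomputer/RedPajama-INCITE-7B-Base\")\n",
    "print(llm.invoke(\"What are the colors of a rainbow?\"))"
   ]
  },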
  {
   "cell_type": "markdown",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2024-04-03T18:49:24.701100Z",
     "iopub.status.busy": "2024-04-03T18:49:24.700943Z",
     "iopub.status.idle": "2024-04-03T18:49:24.705570Z",
     "shell.execute_reply": "2024-04-03T18:49:24.704943Z",
     "shell.execute_reply.started": "2024-04-03T18:49:24.701088Z"
    }
   },
   "source": [
    "## Embedding models\n",
    "\n",
    "See a [usage example](/docs/integrations/text_embedding/together)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_together.embeddings import TogetherEmbeddings"
   ]
  },
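  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal usage sketch added for illustration; the model name below is an assumption taken from Together's model catalog, and ``TOGETHER_API_KEY`` must be set."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# The model name is a placeholder; TOGETHER_API_KEY must be set.\n",
    "embeddings = TogetherEmbeddings(model=\"togethercomputer/m2-bert-80M-8k-retrieval\")\n",
    "vector = embeddings.embed_query(\"What are the colors of a rainbow?\")\n",
    "print(len(vector))"
   ]
  }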
], ],
@@ -70,9 +98,9 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
    "version": "3.10.12"
   }
  },
  "nbformat": 4,
  "nbformat_minor": 4
 }

View File

@ -1,19 +1,33 @@
# TruLens # TruLens
>[TruLens](https://trulens.org) is an [open-source](https://github.com/truera/trulens) package that provides instrumentation and evaluation tools for large language model (LLM) based applications.
This page covers how to use [TruLens](https://trulens.org) to evaluate and track LLM apps built on langchain. This page covers how to use [TruLens](https://trulens.org) to evaluate and track LLM apps built on langchain.
## What is TruLens?
TruLens is an [open-source](https://github.com/truera/trulens) package that provides instrumentation and evaluation tools for large language model (LLM) based applications. ## Installation and Setup
## Quick start Install the `trulens-eval` python package.
Once you've created your LLM chain, you can use TruLens for evaluation and tracking. TruLens has a number of [out-of-the-box Feedback Functions](https://www.trulens.org/trulens_eval/evaluation/feedback_functions/), and is also an extensible framework for LLM evaluation. ```bash
pip install trulens-eval
```
## Quickstart
See the integration details in the [TruLens documentation](https://www.trulens.org/trulens_eval/getting_started/quickstarts/langchain_quickstart/).
### Tracking
Once you've created your LLM chain, you can use TruLens for evaluation and tracking.
TruLens has a number of [out-of-the-box Feedback Functions](https://www.trulens.org/trulens_eval/evaluation/feedback_functions/),
and is also an extensible framework for LLM evaluation.
Create the feedback functions:
```python ```python
# create a feedback function from trulens_eval.feedback import Feedback, Huggingface,
from trulens_eval.feedback import Feedback, Huggingface, OpenAI
# Initialize HuggingFace-based feedback function collection class: # Initialize HuggingFace-based feedback function collection class:
hugs = Huggingface() hugs = Huggingface()
openai = OpenAI() openai = OpenAI()
@@ -29,12 +43,19 @@ qa_relevance = Feedback(openai.relevance).on_input_output()
```python
# Toxicity of input
toxicity = Feedback(openai.toxicity).on_input()
```
### Chains

After you've set up Feedback Function(s) for evaluating your LLM, you can wrap your application with
TruChain to get detailed tracing, logging, and evaluation of your LLM app.

Note: the code for the `chain` creation is in
the [TruLens documentation](https://www.trulens.org/trulens_eval/getting_started/quickstarts/langchain_quickstart/).

```python
from trulens_eval import TruChain

# wrap your chain with TruChain
truchain = TruChain(
    chain,
```
@@ -45,11 +66,16 @@ truchain = TruChain(
```python
truchain("que hora es?")
```

### Evaluation

Now you can explore your LLM-based application!

Doing so will help you understand how your LLM application is performing at a glance. As you iterate new versions of your LLM application, you can compare their performance across all of the different quality metrics you've set up. You'll also be able to view evaluations at a record level and explore the chain metadata for each record.

```python
from trulens_eval import Tru

tru = Tru()
tru.run_dashboard()  # open a Streamlit app to explore
```

View File

@@ -26,7 +26,7 @@ See a [usage example](/docs/integrations/vectorstores/xata).
```python
from langchain_community.vectorstores import XataVectorStore
```

## Memory

See a [usage example](/docs/integrations/memory/xata_chat_message_history).
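A minimal sketch of the memory integration; the constructor arguments are assumptions based on the linked example, and the database URL, API key, and session id are placeholders:

```python
from langchain_community.chat_message_histories import XataChatMessageHistory

# All values below are placeholders for your own Xata workspace.
history = XataChatMessageHistory(
    session_id="session-1",
    db_url="https://demo-uni3q8.us-east-1.xata.sh/db/langchain",
    api_key="YOUR_XATA_API_KEY",
    table_name="memory",
)

history.add_user_message("hi!")
history.add_ai_message("hello, how are you?")
```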