docs `integrations/providers` update 10 (#19970)

Fixed broken links. Formatted for consistent forms. Added missing
imports in the example code.
Leonid Ganeline 5 months ago committed by GitHub
parent 82f0198be2
commit 4c969286fe

@ -10,7 +10,9 @@
> Alibaba's own e-commerce ecosystem.
## Chat Models
### Alibaba Cloud PAI EAS
See [installation instructions and a usage example](/docs/integrations/chat/alibaba_cloud_pai_eas).
@ -18,7 +20,9 @@ See [installation instructions and a usage example](/docs/integrations/chat/alib
from langchain_community.chat_models import PaiEasChatEndpoint
```
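Once imported, the endpoint can be instantiated with your EAS service credentials. The following is a minimal sketch, assuming the `eas_service_url` and `eas_service_token` init params and a deployed PAI-EAS chat service; check the linked usage example for the authoritative parameters:

```python
import os

from langchain_community.chat_models import PaiEasChatEndpoint
from langchain_core.messages import HumanMessage

# Hypothetical EAS service credentials, read from environment variables
chat = PaiEasChatEndpoint(
    eas_service_url=os.environ["EAS_SERVICE_URL"],
    eas_service_token=os.environ["EAS_SERVICE_TOKEN"],
)

# Send a single human message and print the model's reply
response = chat.invoke([HumanMessage(content="Say hello")])
print(response.content)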
## Vector stores
### Alibaba Cloud OpenSearch
See [installation instructions and a usage example](/docs/integrations/vectorstores/alibabacloud_opensearch).
@ -26,7 +30,17 @@ See [installation instructions and a usage example](/docs/integrations/vectorsto
from langchain_community.vectorstores import AlibabaCloudOpenSearch, AlibabaCloudOpenSearchSettings
```
### Alibaba Cloud Tair
See [installation instructions and a usage example](/docs/integrations/vectorstores/tair).
```python
from langchain_community.vectorstores import Tair
```
## Document Loaders
### Alibaba Cloud MaxCompute
See [installation instructions and a usage example](/docs/integrations/document_loaders/alibaba_cloud_maxcompute).

@ -1,22 +1,23 @@
# Tair
>[Alibaba Cloud Tair](https://www.alibabacloud.com/help/en/tair/latest/what-is-tair) is a cloud native in-memory database service
> developed by `Alibaba Cloud`. It provides rich data models and enterprise-grade capabilities to
> support your real-time online scenarios while maintaining full compatibility with open-source `Redis`.
> `Tair` also introduces persistent memory-optimized instances that are based on
> new non-volatile memory (NVM) storage medium.
## Installation and Setup
Install Tair Python SDK:
```bash
pip install tair
```
## Vector Store

There exists a wrapper around `TairVector`, allowing you to use it as a vector store,
whether for semantic search or example selection.
```python
from langchain_community.vectorstores import Tair
```
See a [usage example](/docs/integrations/vectorstores/tair).
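A minimal sketch of building and querying the store, assuming a reachable Tair (Redis-compatible) instance at `tair_url` and that `Tair.from_texts` follows the standard LangChain vector-store constructor pattern; the endpoint URL and embedding model here are placeholders:

```python
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import Tair

# Hypothetical Tair endpoint; replace with your instance URL
tair_url = "redis://localhost:6379"

# FakeEmbeddings stands in for a real embedding model in this sketch
embeddings = FakeEmbeddings(size=128)

# Build the vector store from a few texts and run a similarity search
vector_store = Tair.from_texts(
    ["Tair is a Redis-compatible in-memory database"],
    embeddings,
    tair_url=tair_url,
)
docs = vector_store.similarity_search("What is Tair?")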

@ -1,81 +1,38 @@
# TiDB
> [TiDB Cloud](https://tidbcloud.com/), is a comprehensive Database-as-a-Service (DBaaS) solution,
> that provides dedicated and serverless options. `TiDB Serverless` is now integrating
> a built-in vector search into the MySQL landscape. With this enhancement, you can seamlessly
> develop AI applications using `TiDB Serverless` without the need for a new database or additional
> technical stacks. Be among the first to experience it by joining the [waitlist for the private beta](https://tidb.cloud/ai).
As part of our ongoing efforts to empower TiDB users in leveraging AI application development, we provide support for
- Memory, enabling the storage of chat history messages directly within TiDB;
- TiDB Loader, streamlining the process of loading data from TiDB using LangChain;
- TiDB Vector Store, enabling the use of TiDB Cloud as a vector store, capitalizing on TiDB's robust database infrastructure.
## Installation and Setup
Get the connection details for your TiDB database from the [TiDB Cloud](https://tidbcloud.com/) console.
## Memory
Utilize TiDB Cloud to store chat message history, leveraging the unlimited scalability of TiDB Cloud Serverless. This enables the storage of massive amounts of historical data without the need to maintain message retention windows.
```python
from langchain_community.chat_message_histories import TiDBChatMessageHistory

# Connection string obtained from the TiDB Cloud console (placeholder)
tidb_connection_string = "mysql+pymysql://user:password@host:4000/test"

history = TiDBChatMessageHistory(
    connection_string=tidb_connection_string,
    session_id="code_gen",
)

history.add_user_message("How's our feature going?")
history.add_ai_message(
    "It's going well. We are working on testing now. It will be released in Feb."
)
```
Please refer to the details [here](/docs/integrations/memory/tidb_chat_message_history).
## Document loader

Effortlessly load data from TiDB into other LangChain components using SQL. This simplifies the integration process, allowing for seamless data manipulation and utilization within your AI applications.
```python
from langchain_community.document_loaders import TiDBLoader
# Connection string and table name for your TiDB instance (placeholders)
tidb_connection_string = "mysql+pymysql://user:password@host:4000/test"
table_name = "test_table"

# Set up TiDBLoader to retrieve data
loader = TiDBLoader(
    connection_string=tidb_connection_string,
    query=f"SELECT * FROM {table_name};",
    page_content_columns=["name", "description"],
    metadata_columns=["id"],
)

# Load data
documents = loader.load()
```
Please refer to the details [here](/docs/integrations/document_loaders/tidb).
## Vector store
With TiDB's exceptional database capabilities, easily manage and store billions of vectorized data. This enhances the performance and scalability of AI applications, providing a robust foundation for your vector storage needs.
```python
from typing import List, Tuple

from langchain_community.vectorstores import TiDBVectorStore
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

# Connection URL obtained from the TiDB Cloud console (placeholder)
tidb_connection_url = "mysql+pymysql://user:password@host:4000/test"

embeddings = OpenAIEmbeddings()

db = TiDBVectorStore.from_texts(
    embedding=embeddings,
    texts=[
        "Andrew likes eating oranges",
        "Alexandra is from England",
        "Ketanji Brown Jackson is a judge",
    ],
    table_name="tidb_vector_langchain",
    connection_string=tidb_connection_url,
    distance_strategy="cosine",
)

query = "Can you tell me about Alexandra?"
docs_with_score: List[Tuple[Document, float]] = db.similarity_search_with_score(query)
for doc, score in docs_with_score:
    print("-" * 80)
    print("Score: ", score)
    print(doc.page_content)
    print("-" * 80)
```

Please refer to the details [here](/docs/integrations/vectorstores/tidb_vector).

@ -1,32 +1,37 @@
# TigerGraph
What is `TigerGraph`?
**TigerGraph in a nutshell:**
- `TigerGraph` is a natively distributed and high-performance graph database.
- The storage of data in a graph format of vertices and edges leads to rich relationships, ideal for grounding LLM responses.
- Get started quickly with `TigerGraph` by visiting [their website](https://tigergraph.com/).
## Installation and Setup
Install the Python SDK:
```bash
pip install pyTigerGraph
```
## Graph store
### TigerGraph Store
To utilize the `TigerGraph InquiryAI` functionality, you can import `TigerGraph` from `langchain_community.graphs`.
```python
import pyTigerGraph as tg

from langchain_community.graphs import TigerGraph

conn = tg.TigerGraphConnection(
    host="DATABASE_HOST_HERE",
    graphname="GRAPH_NAME_HERE",
    username="USERNAME_HERE",
    password="PASSWORD_HERE",
)

# ==== CONFIGURE INQUIRYAI HOST ====
conn.ai.configureInquiryAIHost("INQUIRYAI_HOST_HERE")

graph = TigerGraph(conn)
result = graph.query("How many servers are there?")
print(result)
```

@ -6,13 +6,19 @@
"source": [
"# Together AI\n",
"\n",
"> [Together AI](https://together.ai) is a cloud platform for building and running generative AI.\n",
"> \n",
"> It makes it easy to fine-tune or run leading open-source models with a couple lines of code.\n",
"> We have integrated the world's leading open-source models, including `Llama-2`, `RedPajama`, `Falcon`, `Alpaca`, `Stable Diffusion XL`, and more.\n",
"\n",
"## Installation and Setup\n",
"\n",
"To use, you'll need an API key which you can find [here](https://api.together.xyz/settings/api-keys).\n",
"\n",
"The API key can be passed in as the init param\n",
"``together_api_key`` or set as environment variable ``TOGETHER_API_KEY``.\n",
"\n",
"See details in the [Together API reference](https://docs.together.ai/reference).\n",
"\n",
"You will also need to install the `langchain-together` integration package:"
]
@ -26,6 +32,15 @@
"%pip install --upgrade --quiet langchain-together"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## LLMs\n",
"\n",
"See a [usage example](/docs/integrations/llms/together)."
]
},
{
"cell_type": "code",
"execution_count": 2,
@ -34,20 +49,33 @@
},
"outputs": [],
"source": [
"from langchain_together import Together"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Embedding models\n",
"\n",
"See a [usage example](/docs/integrations/text_embedding/together)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_together.embeddings import TogetherEmbeddings"
]
}
],
@ -70,9 +98,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

@ -1,19 +1,33 @@
# TruLens
>[TruLens](https://trulens.org) is an [open-source](https://github.com/truera/trulens) package that provides instrumentation and evaluation tools for large language model (LLM) based applications.
## Installation and Setup
Install the `trulens-eval` Python package.
```bash
pip install trulens-eval
```
See the integration details in the [TruLens documentation](https://www.trulens.org/trulens_eval/getting_started/quickstarts/langchain_quickstart/).
## Quick start
Once you've created your LLM chain, you can use TruLens for evaluation and tracking.
TruLens has a number of [out-of-the-box Feedback Functions](https://www.trulens.org/trulens_eval/evaluation/feedback_functions/),
and is also an extensible framework for LLM evaluation.
Create the feedback functions:
```python
from trulens_eval.feedback import Feedback, Huggingface, OpenAI
# Initialize HuggingFace-based feedback function collection class:
hugs = Huggingface()
openai = OpenAI()
@ -29,12 +43,19 @@ qa_relevance = Feedback(openai.relevance).on_input_output()
# Toxicity of input
toxicity = Feedback(openai.toxicity).on_input()
```
### Chains
After you've set up Feedback Function(s) for evaluating your LLM, you can wrap your application with
TruChain to get detailed tracing, logging and evaluation of your LLM app.
Note: the code for the `chain` creation is in
the [TruLens documentation](https://www.trulens.org/trulens_eval/getting_started/quickstarts/langchain_quickstart/).
```python
from trulens_eval import TruChain
# wrap your chain with TruChain
truchain = TruChain(
chain,
@ -45,11 +66,16 @@ truchain = TruChain(
truchain("que hora es?")
```
### Evaluation
Now you can explore your LLM-based application!
Doing so will help you understand how your LLM application is performing at a glance. As you iterate new versions of your LLM application, you can compare their performance across all of the different quality metrics you've set up. You'll also be able to view evaluations at a record level, and explore the chain metadata for each record.
```python
from trulens_eval import Tru
tru = Tru()
tru.run_dashboard() # open a Streamlit app to explore
```

@ -26,7 +26,7 @@ See a [usage example](/docs/integrations/vectorstores/xata).
from langchain_community.vectorstores import XataVectorStore
```
## Memory
See a [usage example](/docs/integrations/memory/xata_chat_message_history).
