docs: integrations reference updates 10 (#25556)

Added missing provider pages. Added descriptions, links.
Leonid Ganeline 2024-08-22 10:21:54 -07:00 committed by GitHub
parent 9447925d94
commit 624e0747b9
GPG Key ID: B5690EEEBB952194
8 changed files with 125 additions and 6 deletions

View File

@ -625,6 +625,7 @@ from langchain.retrievers import GoogleVertexAISearchRetriever
> from Google Cloud allows enterprises to search, store, govern, and manage documents and their AI-extracted
> data and metadata in a single platform.
Note: `GoogleDocumentAIWarehouseRetriever` is deprecated, use `DocumentAIWarehouseRetriever` (see below).
```python
from langchain.retrievers import GoogleDocumentAIWarehouseRetriever
docai_wh_retriever = GoogleDocumentAIWarehouseRetriever(
@ -636,6 +637,10 @@ documents = docai_wh_retriever.invoke(
)
```
```python
from langchain_google_community.documentai_warehouse import DocumentAIWarehouseRetriever
```
## Tools
### Text-to-Speech

View File

@ -466,6 +466,22 @@ See a [usage example](/docs/integrations/tools/playwright).
from langchain_community.agent_toolkits import PlayWrightBrowserToolkit
```
#### PlayWright Browser individual tools
You can use individual tools from the PlayWright Browser Toolkit.
```python
from langchain_community.tools.playwright import ClickTool
from langchain_community.tools.playwright import CurrentWebPageTool
from langchain_community.tools.playwright import ExtractHyperlinksTool
from langchain_community.tools.playwright import ExtractTextTool
from langchain_community.tools.playwright import GetElementsTool
from langchain_community.tools.playwright import NavigateTool
from langchain_community.tools.playwright import NavigateBackTool
```
## Graphs
### Azure Cosmos DB for Apache Gremlin

View File

@ -0,0 +1,28 @@
# Connery
>[Connery SDK](https://github.com/connery-io/connery-sdk) is an NPM package that
> includes both an SDK and a CLI, designed for the development of plugins and actions.
>
>The CLI automates many things in the development process. The SDK
> offers a JavaScript API for defining plugins and actions and packaging them
> into a plugin server with a standardized REST API generated from the metadata.
> The plugin server handles authorization, input validation, and logging.
> So you can focus on the logic of your actions.
>
> See the use cases and examples in the [Connery SDK documentation](https://sdk.connery.io/docs/use-cases/)
## Toolkit
See [usage example](/docs/integrations/tools/connery).
```python
from langchain_community.agent_toolkits.connery import ConneryToolkit
```
## Tools
### ConneryAction
```python
from langchain_community.tools.connery import ConneryService
```

View File

@ -6,12 +6,27 @@ This document demonstrates how to leverage DashVector within the LangChain ecosystem
It is broken into two parts: installation and setup, and then references to specific DashVector wrappers.
## Installation and Setup
Install the Python SDK:
```bash
pip install dashvector
```
## VectorStore
You must have an API key. Here are the [installation instructions](https://help.aliyun.com/document_detail/2510223.html).
## Embedding models
```python
from langchain_community.embeddings import DashScopeEmbeddings
```
See a [usage example](/docs/integrations/vectorstores/dashvector).
## Vector Store
A DashVector Collection is wrapped as a familiar VectorStore for native usage within LangChain,
which allows it to be readily used for various scenarios, such as semantic search or example selection.

View File

@ -19,7 +19,7 @@ os.environ["DATAFORSEO_PASSWORD"] = "your_password"
## Utility
The DataForSEO utility wraps the API. To import this utility, use:
The `DataForSEO` utility wraps the API. To import this utility, use:
```python
from langchain_community.utilities.dataforseo_api_search import DataForSeoAPIWrapper
@ -36,6 +36,13 @@ from langchain.agents import load_tools
tools = load_tools(["dataforseo-api-search"])
```
This will load the following tools:
```python
from langchain_community.tools import DataForSeoAPISearchRun
from langchain_community.tools import DataForSeoAPISearchResults
```
## Example usage
```python

View File

@ -1,10 +1,21 @@
# DingoDB
This page covers how to use the DingoDB ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific DingoDB wrappers.
>[DingoDB](https://github.com/dingodb) is a distributed multi-modal vector
> database. It combines the features of a data lake and a vector database,
> allowing for the storage of any type of data (key-value, PDF, audio,
> video, etc.) regardless of its size. Utilizing DingoDB, you can construct
> your own Vector Ocean (the next-generation data architecture following data
> warehouse and data lake). This enables
> the analysis of both structured and unstructured data through
> a singular SQL with exceptionally low latency in real time.
## Installation and Setup
- Install the Python SDK with `pip install dingodb`
Install the Python SDK:
```bash
pip install dingodb
```
## VectorStore
@ -12,6 +23,7 @@ There exists a wrapper around DingoDB indexes, allowing you to use it as a vecto
whether for semantic search or example selection.
To import this vectorstore:
```python
from langchain_community.vectorstores import Dingo
```

View File

@ -20,7 +20,7 @@ LangChain provides access to the `In-memory` and `HNSW` vector stores from the
See a [usage example](/docs/integrations/vectorstores/docarray_hnsw).
```python
from langchain_community.vectorstores DocArrayHnswSearch
from langchain_community.vectorstores import DocArrayHnswSearch
```
See a [usage example](/docs/integrations/vectorstores/docarray_in_memory).
@ -28,3 +28,10 @@ See a [usage example](/docs/integrations/vectorstores/docarray_in_memory).
from langchain_community.vectorstores import DocArrayInMemorySearch
```
## Retriever
See a [usage example](/docs/integrations/retrievers/docarray_retriever).
```python
from langchain_community.retrievers import DocArrayRetriever
```

View File

@ -0,0 +1,29 @@
# Pandas
>[pandas](https://pandas.pydata.org) is a fast, powerful, flexible and easy to use open source data analysis and manipulation tool,
> built on top of the `Python` programming language.
## Installation and Setup
Install the `pandas` package using `pip`:
```bash
pip install pandas
```
## Document loader
See a [usage example](/docs/integrations/document_loaders/pandas_dataframe).
```python
from langchain_community.document_loaders import DataFrameLoader
```
## Toolkit
See a [usage example](/docs/integrations/tools/pandas).
```python
from langchain_experimental.agents.agent_toolkits import create_pandas_dataframe_agent
```