docs: platform pages update (#17836)

The `Integrations` platform pages have their ToC sections in no consistent order. For example, on the
[google](https://python.langchain.com/docs/integrations/platforms/google)
page the `LLMs` section is not the first section, even though it comes first in the
[Components](https://python.langchain.com/docs/integrations/components)
menu.
Updates:
* reorganized the page sections to follow the `Components` menu order
* fixed section names: "Text Embedding Models" -> "Embedding Models"
Leonid Ganeline committed 3dabd3f214 (parent 07c518ad3e) via GitHub

@ -67,7 +67,7 @@ See a [usage example](/docs/integrations/chat/bedrock).
from langchain_community.chat_models import BedrockChat
```
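A minimal sketch of calling the chat model (the model ID and region are placeholders; Bedrock access for that model must already be enabled in your account):

```python
from langchain_community.chat_models import BedrockChat
from langchain_core.messages import HumanMessage

# Assumes AWS credentials are configured (e.g. via `aws configure`).
chat = BedrockChat(model_id="anthropic.claude-v2", region_name="us-east-1")
print(chat.invoke([HumanMessage(content="Say hello in one sentence.")]).content)
```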
## Text Embedding Models
## Embedding Models
### Bedrock
@ -84,26 +84,6 @@ from langchain_community.embeddings import SagemakerEndpointEmbeddings
from langchain_community.llms.sagemaker_endpoint import ContentHandlerBase
```
## Chains
### Amazon Comprehend Moderation Chain
>[Amazon Comprehend](https://aws.amazon.com/comprehend/) is a natural-language processing (NLP) service that
> uses machine learning to uncover valuable insights and connections in text.
We need to install the `boto3` and `nltk` libraries.
```bash
pip install boto3 nltk
```
See a [usage example](/docs/guides/safety/amazon_comprehend_chain).
```python
from langchain_experimental.comprehend_moderation import AmazonComprehendModerationChain
```
## Document loaders
### AWS S3 Directory and File
@ -132,25 +112,55 @@ See a [usage example](/docs/integrations/document_loaders/amazon_textract).
from langchain_community.document_loaders import AmazonTextractPDFLoader
```
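A hedged sketch of loading a scanned document (the file path is a placeholder; the loader may also need the `amazon-textract-caller` package and AWS credentials with Textract permissions):

```python
from langchain_community.document_loaders import AmazonTextractPDFLoader

# Works with a local file or an S3 URI; this path is a placeholder.
loader = AmazonTextractPDFLoader("example_data/scanned-invoice.png")
documents = loader.load()
print(documents[0].page_content[:200])
```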
## Memory
## Vector stores
### AWS DynamoDB
### Amazon OpenSearch Service
>[AWS DynamoDB](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/dynamodb/index.html)
> is a fully managed `NoSQL` database service that provides fast and predictable performance with seamless scalability.
We have to configure the [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html).
> [Amazon OpenSearch Service](https://aws.amazon.com/opensearch-service/) performs
> interactive log analytics, real-time application monitoring, website search, and more. `OpenSearch` is
> an open source,
> distributed search and analytics suite derived from `Elasticsearch`. `Amazon OpenSearch Service` offers the
> latest versions of `OpenSearch`, support for many versions of `Elasticsearch`, as well as
> visualization capabilities powered by `OpenSearch Dashboards` and `Kibana`.
We need to install the `boto3` library.
We need to install several python libraries.
```bash
pip install boto3
pip install boto3 requests requests-aws4auth
```
See a [usage example](/docs/integrations/memory/aws_dynamodb).
See a [usage example](/docs/integrations/vectorstores/opensearch#using-aos-amazon-opensearch-service).
```python
from langchain.memory import DynamoDBChatMessageHistory
from langchain_community.vectorstores import OpenSearchVectorSearch
```
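A rough sketch of connecting to an existing Amazon OpenSearch Service domain with SigV4 auth (the domain endpoint, index name, and embedding model are placeholders; the `opensearch-py` package is assumed to be installed as well):

```python
import boto3
from opensearchpy import RequestsHttpConnection
from requests_aws4auth import AWS4Auth
from langchain_community.embeddings import BedrockEmbeddings
from langchain_community.vectorstores import OpenSearchVectorSearch

# Placeholders: replace the region, domain endpoint, and index name with your own.
region = "us-east-1"
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(
    credentials.access_key, credentials.secret_key, region, "es",
    session_token=credentials.token,
)

vectorstore = OpenSearchVectorSearch(
    opensearch_url="https://my-domain.us-east-1.es.amazonaws.com",
    index_name="my-index",
    embedding_function=BedrockEmbeddings(region_name=region),
    http_auth=awsauth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection,
)
docs = vectorstore.similarity_search("what is a sales pipeline?", k=3)
```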
### Amazon DocumentDB Vector Search
>[Amazon DocumentDB (with MongoDB Compatibility)](https://docs.aws.amazon.com/documentdb/) makes it easy to set up, operate, and scale MongoDB-compatible databases in the cloud.
> With Amazon DocumentDB, you can run the same application code and use the same drivers and tools that you use with MongoDB.
> Vector search for Amazon DocumentDB combines the flexibility and rich querying capability of a JSON-based document database with the power of vector search.
#### Installation and Setup
See [detailed configuration instructions](/docs/integrations/vectorstores/documentdb).
We need to install the `pymongo` python package.
```bash
pip install pymongo
```
#### Deploy DocumentDB on AWS
[Amazon DocumentDB (with MongoDB Compatibility)](https://docs.aws.amazon.com/documentdb/) is a fast, reliable, and fully managed database service. Amazon DocumentDB makes it easy to set up, operate, and scale MongoDB-compatible databases in the cloud.
AWS offers services for computing, databases, storage, analytics, and other functionality. For an overview of all AWS services, see [Cloud Computing with Amazon Web Services](https://aws.amazon.com/what-is-aws/).
See a [usage example](/docs/integrations/vectorstores/documentdb).
```python
from langchain.vectorstores import DocumentDBVectorSearch
```
## Retrievers
@ -197,58 +207,6 @@ See a [usage example](/docs/integrations/retrievers/bedrock).
from langchain.retrievers import AmazonKnowledgeBasesRetriever
```
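A minimal sketch (the knowledge base ID is a placeholder; the call uses your default AWS credentials):

```python
from langchain.retrievers import AmazonKnowledgeBasesRetriever

retriever = AmazonKnowledgeBasesRetriever(
    knowledge_base_id="ABCDEFGHIJ",  # placeholder knowledge base ID
    retrieval_config={"vectorSearchConfiguration": {"numberOfResults": 4}},
)
docs = retriever.get_relevant_documents("How do I reset my device?")
```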
## Vector stores
### Amazon OpenSearch Service
> [Amazon OpenSearch Service](https://aws.amazon.com/opensearch-service/) performs
> interactive log analytics, real-time application monitoring, website search, and more. `OpenSearch` is
> an open source,
> distributed search and analytics suite derived from `Elasticsearch`. `Amazon OpenSearch Service` offers the
> latest versions of `OpenSearch`, support for many versions of `Elasticsearch`, as well as
> visualization capabilities powered by `OpenSearch Dashboards` and `Kibana`.
We need to install several python libraries.
```bash
pip install boto3 requests requests-aws4auth
```
See a [usage example](/docs/integrations/vectorstores/opensearch#using-aos-amazon-opensearch-service).
```python
from langchain_community.vectorstores import OpenSearchVectorSearch
```
### Amazon DocumentDB Vector Search
>[Amazon DocumentDB (with MongoDB Compatibility)](https://docs.aws.amazon.com/documentdb/) makes it easy to set up, operate, and scale MongoDB-compatible databases in the cloud.
> With Amazon DocumentDB, you can run the same application code and use the same drivers and tools that you use with MongoDB.
> Vector search for Amazon DocumentDB combines the flexibility and rich querying capability of a JSON-based document database with the power of vector search.
#### Installation and Setup
See [detailed configuration instructions](/docs/integrations/vectorstores/documentdb).
We need to install the `pymongo` python package.
```bash
pip install pymongo
```
#### Deploy DocumentDB on AWS
[Amazon DocumentDB (with MongoDB Compatibility)](https://docs.aws.amazon.com/documentdb/) is a fast, reliable, and fully managed database service. Amazon DocumentDB makes it easy to set up, operate, and scale MongoDB-compatible databases in the cloud.
AWS offers services for computing, databases, storage, analytics, and other functionality. For an overview of all AWS services, see [Cloud Computing with Amazon Web Services](https://aws.amazon.com/what-is-aws/).
See a [usage example](/docs/integrations/vectorstores/documentdb).
```python
from langchain.vectorstores import DocumentDBVectorSearch
```
## Tools
### AWS Lambda
@ -267,6 +225,26 @@ pip install boto3
See a [usage example](/docs/integrations/tools/awslambda).
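As a hedged sketch, a Lambda function can be exposed to an agent with `load_tools` (the function name and tool description below are placeholders):

```python
from langchain.agents import load_tools

# Placeholders: point these at a Lambda function that exists in your account.
tools = load_tools(
    ["awslambda"],
    awslambda_tool_name="email-sender",
    awslambda_tool_description="sends an email with the specified content to the recipient",
    function_name="my-email-function",
)
```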
## Memory
### AWS DynamoDB
>[AWS DynamoDB](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/dynamodb/index.html)
> is a fully managed `NoSQL` database service that provides fast and predictable performance with seamless scalability.
We have to configure the [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html).
We need to install the `boto3` library.
```bash
pip install boto3
```
See a [usage example](/docs/integrations/memory/aws_dynamodb).
```python
from langchain.memory import DynamoDBChatMessageHistory
```
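A minimal sketch, assuming a DynamoDB table named `SessionTable` with a `SessionId` partition key already exists:

```python
from langchain.memory import DynamoDBChatMessageHistory

# The table and session ID are placeholders; AWS credentials must be configured.
history = DynamoDBChatMessageHistory(table_name="SessionTable", session_id="user-42")
history.add_user_message("hi!")
history.add_ai_message("Hello! How can I help you today?")
print(history.messages)
```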
## Callbacks
@ -290,3 +268,23 @@ See a [usage example](/docs/integrations/callbacks/sagemaker_tracking).
```python
from langchain.callbacks import SageMakerCallbackHandler
```
## Chains
### Amazon Comprehend Moderation Chain
>[Amazon Comprehend](https://aws.amazon.com/comprehend/) is a natural-language processing (NLP) service that
> uses machine learning to uncover valuable insights and connections in text.
We need to install the `boto3` and `nltk` libraries.
```bash
pip install boto3 nltk
```
See a [usage example](/docs/guides/safety/amazon_comprehend_chain).
```python
from langchain_experimental.comprehend_moderation import AmazonComprehendModerationChain
```
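A rough sketch of constructing the chain with a `boto3` Comprehend client (the region and the input text are placeholders):

```python
import boto3
from langchain_experimental.comprehend_moderation import AmazonComprehendModerationChain

comprehend_client = boto3.client("comprehend", region_name="us-east-1")
moderation = AmazonComprehendModerationChain(client=comprehend_client, verbose=True)

# Screens the text for PII, toxicity, and prompt safety before passing it on.
response = moderation.invoke({"input": "A user-supplied prompt to be screened."})
```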

@ -20,25 +20,9 @@ See a [usage example](/docs/integrations/llms/google_ai).
from langchain_google_genai import GoogleGenerativeAI
```
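A minimal sketch, assuming `GOOGLE_API_KEY` is set in the environment (the model name is an example):

```python
from langchain_google_genai import GoogleGenerativeAI

llm = GoogleGenerativeAI(model="gemini-pro")
print(llm.invoke("Write a limerick about Python."))
```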
### Vertex AI
### Vertex AI Model Garden
Access to `Gemini` and `PaLM` LLMs (like `text-bison` and `code-bison`) via `Vertex AI` on Google Cloud.
We need to install the `langchain-google-vertexai` python package.
```bash
pip install langchain-google-vertexai
```
See a [usage example](/docs/integrations/llms/google_vertex_ai_palm).
```python
from langchain_google_vertexai import VertexAI
```
### Model Garden
Access PaLM and hundreds of OSS models via `Vertex AI Model Garden` on Google Cloud.
Access `PaLM` and hundreds of OSS models via `Vertex AI Model Garden` service.
We need to install the `langchain-google-vertexai` python package.
@ -52,6 +36,7 @@ See a [usage example](/docs/integrations/llms/google_vertex_ai_palm#vertex-model
from langchain_google_vertexai import VertexAIModelGarden
```
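A minimal sketch (the project ID and endpoint ID are placeholders for a model deployed from the Model Garden to a Vertex AI endpoint):

```python
from langchain_google_vertexai import VertexAIModelGarden

llm = VertexAIModelGarden(project="my-gcp-project", endpoint_id="1234567890")
print(llm.invoke("Hello, Model Garden!"))
```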
## Chat models
### Google Generative AI
@ -119,6 +104,40 @@ See a [usage example](/docs/integrations/chat/google_vertex_ai_palm).
from langchain_google_vertexai import ChatVertexAI
```
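A minimal sketch, assuming Google Cloud application-default credentials are configured (the model name is an example):

```python
from langchain_google_vertexai import ChatVertexAI
from langchain_core.messages import HumanMessage

chat = ChatVertexAI(model_name="chat-bison")
print(chat.invoke([HumanMessage(content="Suggest a name for a robot dog.")]).content)
```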
## Embedding models
### Google Generative AI Embeddings
See a [usage example](/docs/integrations/text_embedding/google_generative_ai).
```bash
pip install -U langchain-google-genai
```
Configure your API key.
```bash
export GOOGLE_API_KEY=your-api-key
```
```python
from langchain_google_genai import GoogleGenerativeAIEmbeddings
```
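A minimal sketch, assuming `GOOGLE_API_KEY` is set (the model name is an example):

```python
from langchain_google_genai import GoogleGenerativeAIEmbeddings

embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")
vector = embeddings.embed_query("hello, world!")
print(len(vector))
```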
### Vertex AI
We need to install the `langchain-google-vertexai` python package.
```bash
pip install langchain-google-vertexai
```
See a [usage example](/docs/integrations/text_embedding/google_vertex_ai_palm).
```python
from langchain_google_vertexai import VertexAIEmbeddings
```
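A minimal sketch, assuming application-default credentials are configured (the model name is an example):

```python
from langchain_google_vertexai import VertexAIEmbeddings

embeddings = VertexAIEmbeddings(model_name="textembedding-gecko")
vector = embeddings.embed_query("hello, world!")
```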
## Document Loaders
### AlloyDB for PostgreSQL
@ -797,22 +816,6 @@ See [usage example](/docs/integrations/memory/google_cloud_sql_mssql).
from langchain_google_cloud_sql_mssql import MSSQLEngine, MSSQLChatMessageHistory
```
## El Carro for Oracle Workloads
> Google [El Carro Oracle Operator](https://github.com/GoogleCloudPlatform/elcarro-oracle-operator)
offers a way to run Oracle databases in Kubernetes as a portable, open source,
community driven, no vendor lock-in container orchestration system.
```bash
pip install langchain-google-el-carro
```
See [usage example](/docs/integrations/memory/google_el_carro).
```python
from langchain_google_el_carro import ElCarroChatMessageHistory
```
### Spanner
> [Google Cloud Spanner](https://cloud.google.com/spanner/docs) is a fully managed, mission-critical, relational database service on Google Cloud that offers transactional consistency at global scale, automatic, synchronous replication for high availability, and support for two SQL dialects: GoogleSQL (ANSI 2011 with extensions) and PostgreSQL.
@ -889,10 +892,10 @@ See [usage example](/docs/integrations/memory/google_datastore).
from langchain_google_datastore import DatastoreChatMessageHistory
```
## El Carro Oracle Operator
### El Carro: The Oracle Operator for Kubernetes
> Google [El Carro Oracle Operator](https://github.com/GoogleCloudPlatform/elcarro-oracle-operator)
offers a way to run Oracle databases in Kubernetes as a portable, open source,
> Google [El Carro Oracle Operator for Kubernetes](https://github.com/GoogleCloudPlatform/elcarro-oracle-operator)
offers a way to run `Oracle` databases in `Kubernetes` as a portable, open source,
community driven, no vendor lock-in container orchestration system.
```bash

@ -2,6 +2,15 @@
All functionality related to `Microsoft Azure` and other `Microsoft` products.
## LLMs
### Azure OpenAI
See a [usage example](/docs/integrations/llms/azure_openai).
```python
from langchain_openai import AzureOpenAI
```
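A minimal sketch, assuming `AZURE_OPENAI_ENDPOINT` and `AZURE_OPENAI_API_KEY` are set in the environment (the deployment name and API version are placeholders for your Azure OpenAI resource):

```python
from langchain_openai import AzureOpenAI

llm = AzureOpenAI(
    azure_deployment="my-gpt-35-turbo-instruct",  # placeholder deployment name
    api_version="2023-12-01-preview",             # placeholder API version
)
print(llm.invoke("Tell me a joke about the cloud."))
```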
## Chat Models
### Azure OpenAI
@ -29,7 +38,7 @@ See a [usage example](/docs/integrations/chat/azure_chat_openai)
from langchain_openai import AzureChatOpenAI
```
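A minimal sketch, assuming `AZURE_OPENAI_ENDPOINT` and `AZURE_OPENAI_API_KEY` are set (the deployment name and API version are placeholders):

```python
from langchain_openai import AzureChatOpenAI
from langchain_core.messages import HumanMessage

chat = AzureChatOpenAI(azure_deployment="my-gpt-4o", api_version="2024-02-01")
print(chat.invoke([HumanMessage(content="Translate 'good morning' to French.")]).content)
```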
## Text Embedding Models
## Embedding Models
### Azure OpenAI
See a [usage example](/docs/integrations/text_embedding/azureopenai)
@ -38,15 +47,6 @@ See a [usage example](/docs/integrations/text_embedding/azureopenai)
from langchain_openai import AzureOpenAIEmbeddings
```
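A minimal sketch, assuming `AZURE_OPENAI_ENDPOINT` and `AZURE_OPENAI_API_KEY` are set (the deployment name is a placeholder):

```python
from langchain_openai import AzureOpenAIEmbeddings

embeddings = AzureOpenAIEmbeddings(azure_deployment="my-text-embedding-ada-002")
vector = embeddings.embed_query("hello, world!")
```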
## LLMs
### Azure OpenAI
See a [usage example](/docs/integrations/llms/azure_openai).
```python
from langchain_openai import AzureOpenAI
```
## Document loaders
### Azure AI Data
@ -209,7 +209,6 @@ See a [usage example](/docs/integrations/document_loaders/microsoft_onenote).
from langchain_community.document_loaders.onenote import OneNoteLoader
```
## Vector stores
### Azure Cosmos DB
@ -262,19 +261,6 @@ See a [usage example](/docs/integrations/retrievers/azure_cognitive_search).
from langchain.retrievers import AzureCognitiveSearchRetriever
```
## Utilities
### Bing Search API
>[Microsoft Bing](https://www.bing.com/), commonly referred to as `Bing` or `Bing Search`,
> is a web search engine owned and operated by `Microsoft`.
See a [usage example](/docs/integrations/tools/bing_search).
```python
from langchain_community.utilities import BingSearchAPIWrapper
```
## Toolkits
### Azure Cognitive Services
@ -320,6 +306,19 @@ from langchain_community.agent_toolkits import PowerBIToolkit
from langchain_community.utilities.powerbi import PowerBIDataset
```
## Utilities
### Bing Search API
>[Microsoft Bing](https://www.bing.com/), commonly referred to as `Bing` or `Bing Search`,
> is a web search engine owned and operated by `Microsoft`.
See a [usage example](/docs/integrations/tools/bing_search).
```python
from langchain_community.utilities import BingSearchAPIWrapper
```
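A minimal sketch (the subscription key is a placeholder; setting the `BING_SUBSCRIPTION_KEY` and `BING_SEARCH_URL` environment variables works as well):

```python
from langchain_community.utilities import BingSearchAPIWrapper

search = BingSearchAPIWrapper(
    bing_subscription_key="...",  # placeholder key
    bing_search_url="https://api.bing.microsoft.com/v7.0/search",
)
print(search.run("LangChain"))
```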
## More
### Microsoft Presidio

@ -36,7 +36,6 @@ from langchain_openai import AzureOpenAI
```
For a more detailed walkthrough of the `Azure` wrapper, see [here](/docs/integrations/llms/azure_openai)
## Chat model
See a [usage example](/docs/integrations/chat/openai).
@ -51,8 +50,7 @@ from langchain_openai import AzureChatOpenAI
```
For a more detailed walkthrough of the `Azure` wrapper, see [here](/docs/integrations/chat/azure_chat_openai)
## Text Embedding Model
## Embedding Model
See a [usage example](/docs/integrations/text_embedding/openai)
@ -60,19 +58,6 @@ See a [usage example](/docs/integrations/text_embedding/openai)
from langchain_openai import OpenAIEmbeddings
```
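A minimal sketch, assuming `OPENAI_API_KEY` is set (the model name is an example):

```python
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
vector = embeddings.embed_query("hello, world!")
print(len(vector))
```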
## Tokenizer
There are several places you can use the `tiktoken` tokenizer. By default, it is used to count tokens
for OpenAI LLMs.
You can also use it to count tokens when splitting documents with
```python
from langchain_text_splitters import CharacterTextSplitter
CharacterTextSplitter.from_tiktoken_encoder(...)
```
For a more detailed walkthrough of this, see [this notebook](/docs/modules/data_connection/document_transformers/split_by_token#tiktoken)
## Document Loader
See a [usage example](/docs/integrations/document_loaders/chatgpt_loader).
@ -89,12 +74,19 @@ See a [usage example](/docs/integrations/retrievers/chatgpt-plugin).
from langchain.retrievers import ChatGPTPluginRetriever
```
## Chain
## Tools
See a [usage example](/docs/guides/safety/moderation).
### Dall-E Image Generator
>[OpenAI Dall-E](https://openai.com/dall-e-3) are text-to-image models developed by `OpenAI`
> using deep learning methodologies to generate digital images from natural language descriptions,
> called "prompts".
See a [usage example](/docs/integrations/tools/dalle_image_generator).
```python
from langchain.chains import OpenAIModerationChain
from langchain_community.utilities.dalle_image_generator import DallEAPIWrapper
```
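A minimal sketch, assuming `OPENAI_API_KEY` is set; the wrapper returns a URL to the generated image:

```python
from langchain_community.utilities.dalle_image_generator import DallEAPIWrapper

image_url = DallEAPIWrapper().run("a watercolor painting of a lighthouse at dawn")
print(image_url)
```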
## Adapter
@ -105,17 +97,24 @@ See a [usage example](/docs/integrations/adapters/openai).
from langchain.adapters import openai as lc_openai
```
## Tools
## Tokenizer
### Dall-E Image Generator
There are several places you can use the `tiktoken` tokenizer. By default, it is used to count tokens
for OpenAI LLMs.
>[OpenAI Dall-E](https://openai.com/dall-e-3) are text-to-image models developed by `OpenAI`
> using deep learning methodologies to generate digital images from natural language descriptions,
> called "prompts".
You can also use it to count tokens when splitting documents with
```python
from langchain.text_splitter import CharacterTextSplitter
CharacterTextSplitter.from_tiktoken_encoder(...)
```
For a more detailed walkthrough of this, see [this notebook](/docs/modules/data_connection/document_transformers/split_by_token#tiktoken)
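For example, a rough sketch of token-based splitting (the chunk sizes are arbitrary):

```python
from langchain.text_splitter import CharacterTextSplitter

# chunk_size is measured in tiktoken tokens rather than characters.
text_splitter = CharacterTextSplitter.from_tiktoken_encoder(chunk_size=100, chunk_overlap=0)
chunks = text_splitter.split_text("A long document string to split into token-sized chunks...")
print(len(chunks))
```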
## Chain
See a [usage example](/docs/integrations/tools/dalle_image_generator).
See a [usage example](/docs/guides/safety/moderation).
```python
from langchain_community.utilities.dalle_image_generator import DallEAPIWrapper
from langchain.chains import OpenAIModerationChain
```
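A minimal sketch, assuming `OPENAI_API_KEY` is set; flagged inputs are replaced with a warning message (or raise an error if the chain is created with `error=True`):

```python
from langchain.chains import OpenAIModerationChain

moderation_chain = OpenAIModerationChain()
print(moderation_chain.run("This is perfectly harmless text."))
```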
