# AWS

The `LangChain` integrations related to the [Amazon AWS](https://aws.amazon.com/) platform.

First-party AWS integrations are available in the `langchain-aws` package:

```bash
pip install langchain-aws
```

Some community integrations are also available in the `langchain-community` package, which requires the optional `boto3` dependency:

```bash
pip install langchain-community boto3
```

## Chat models

### Bedrock Chat

See a [usage example](/docs/integrations/chat/bedrock).

```python
from langchain_aws import ChatBedrock
```
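
Under the hood, `ChatBedrock` serializes your chat messages into the selected provider's native request format for Bedrock's `InvokeModel` API. As a rough, hypothetical sketch, the request body for an Anthropic model looks like this (field names follow Anthropic's messages schema; other providers on Bedrock use different shapes):

```python
import json

# Sketch of a Bedrock InvokeModel request body for an Anthropic model.
# The field names follow Anthropic's messages schema; other model
# providers on Bedrock expect different request shapes.
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Hello, Bedrock!"}],
})

# ChatBedrock builds and sends a payload like this for you; you only
# pass LangChain message objects and a model_id.
```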
## LLMs

### Bedrock

>[Amazon Bedrock](https://aws.amazon.com/bedrock/) is a fully managed service that offers a choice of
> high-performing foundation models (FMs) from leading AI companies like `AI21 Labs`, `Anthropic`, `Cohere`,
> `Meta`, `Stability AI`, and `Amazon` via a single API, along with a broad set of capabilities you need to
> build generative AI applications with security, privacy, and responsible AI. Using `Amazon Bedrock`,
> you can easily experiment with and evaluate top FMs for your use case, privately customize them with
> your data using techniques such as fine-tuning and `Retrieval Augmented Generation` (`RAG`), and build
> agents that execute tasks using your enterprise systems and data sources. Since `Amazon Bedrock` is
> serverless, you don't have to manage any infrastructure, and you can securely integrate and deploy
> generative AI capabilities into your applications using the AWS services you are already familiar with.

See a [usage example](/docs/integrations/llms/bedrock).

```python
from langchain_aws import BedrockLLM
```

### Amazon API Gateway

>[Amazon API Gateway](https://aws.amazon.com/api-gateway/) is a fully managed service that makes it easy for
> developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door"
> for applications to access data, business logic, or functionality from your backend services. Using
> `API Gateway`, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication
> applications. `API Gateway` supports containerized and serverless workloads, as well as web applications.
>
> `API Gateway` handles all the tasks involved in accepting and processing up to hundreds of thousands of
> concurrent API calls, including traffic management, CORS support, authorization and access control,
> throttling, monitoring, and API version management. `API Gateway` has no minimum fees or startup costs.
> You pay for the API calls you receive and the amount of data transferred out and, with the `API Gateway`
> tiered pricing model, you can reduce your cost as your API usage scales.

See a [usage example](/docs/integrations/llms/amazon_api_gateway).

```python
from langchain_community.llms import AmazonAPIGateway
```

### SageMaker Endpoint

>[Amazon SageMaker](https://aws.amazon.com/sagemaker/) is a system that can build, train, and deploy
> machine learning (ML) models with fully managed infrastructure, tools, and workflows.

We use `SageMaker` to host our model and expose it as a `SageMaker Endpoint`.

See a [usage example](/docs/integrations/llms/sagemaker).

```python
from langchain_aws import SagemakerEndpoint
```

## Embedding Models

### Bedrock

See a [usage example](/docs/integrations/text_embedding/bedrock).

```python
from langchain_community.embeddings import BedrockEmbeddings
```

### SageMaker Endpoint

See a [usage example](/docs/integrations/text_embedding/sagemaker-endpoint).

```python
from langchain_community.embeddings import SagemakerEndpointEmbeddings
from langchain_community.llms.sagemaker_endpoint import ContentHandlerBase
```
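
A content handler translates between LangChain's inputs and the JSON your deployed model container expects. As an illustrative, pure-Python sketch of what a `ContentHandlerBase` subclass's `transform_input`/`transform_output` methods typically do for an embeddings endpoint (the `text_inputs` and `embedding` keys here are assumptions; use whatever schema your container actually serves):

```python
import json

# Illustrative payload helpers showing what a content handler's
# transform_input/transform_output typically do for an embeddings endpoint.
# The JSON keys ("text_inputs", "embedding") are assumptions; match them
# to the schema of the model container you deployed.
def transform_input(texts: list[str]) -> bytes:
    # Serialize the batch of texts into the request body sent to SageMaker.
    return json.dumps({"text_inputs": texts}).encode("utf-8")

def transform_output(output: bytes) -> list[list[float]]:
    # Parse the endpoint's JSON response into a list of embedding vectors.
    response = json.loads(output.decode("utf-8"))
    return response["embedding"]
```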
## Document loaders

### AWS S3 Directory and File

>[Amazon Simple Storage Service (Amazon S3)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-folders.html)
> is an object storage service.

>[AWS S3 Directory](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-folders.html)

>[AWS S3 Buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingBucket.html)

See a [usage example for S3DirectoryLoader](/docs/integrations/document_loaders/aws_s3_directory).

See a [usage example for S3FileLoader](/docs/integrations/document_loaders/aws_s3_file).

```python
from langchain_community.document_loaders import S3DirectoryLoader, S3FileLoader
```

### Amazon Textract

>[Amazon Textract](https://docs.aws.amazon.com/managedservices/latest/userguide/textract.html) is a machine
> learning (ML) service that automatically extracts text, handwriting, and data from scanned documents.

See a [usage example](/docs/integrations/document_loaders/amazon_textract).

```python
from langchain_community.document_loaders import AmazonTextractPDFLoader
```

### Amazon Athena

>[Amazon Athena](https://aws.amazon.com/athena/) is a serverless, interactive analytics service built
> on open-source frameworks, supporting open-table and file formats.

See a [usage example](/docs/integrations/document_loaders/athena).

```python
from langchain_community.document_loaders.athena import AthenaLoader
```

## Vector stores

### Amazon OpenSearch Service

> [Amazon OpenSearch Service](https://aws.amazon.com/opensearch-service/) performs
> interactive log analytics, real-time application monitoring, website search, and more. `OpenSearch` is
> an open-source, distributed search and analytics suite derived from `Elasticsearch`.
> `Amazon OpenSearch Service` offers the latest versions of `OpenSearch`, support for many versions
> of `Elasticsearch`, as well as visualization capabilities powered by `OpenSearch Dashboards` and `Kibana`.

We need to install several Python libraries:

```bash
pip install boto3 requests requests-aws4auth
```

See a [usage example](/docs/integrations/vectorstores/opensearch#using-aos-amazon-opensearch-service).

```python
from langchain_community.vectorstores import OpenSearchVectorSearch
```

### Amazon DocumentDB Vector Search

>[Amazon DocumentDB (with MongoDB Compatibility)](https://docs.aws.amazon.com/documentdb/) makes it easy to set up, operate, and scale MongoDB-compatible databases in the cloud.
> With Amazon DocumentDB, you can run the same application code and use the same drivers and tools that you use with MongoDB.
> Vector search for Amazon DocumentDB combines the flexibility and rich querying capability of a JSON-based document database with the power of vector search.

#### Installation and Setup

See [detailed configuration instructions](/docs/integrations/vectorstores/documentdb).

We need to install the `pymongo` Python package:

```bash
pip install pymongo
```

#### Deploy DocumentDB on AWS

[Amazon DocumentDB (with MongoDB Compatibility)](https://docs.aws.amazon.com/documentdb/) is a fast, reliable, and fully managed database service. Amazon DocumentDB makes it easy to set up, operate, and scale MongoDB-compatible databases in the cloud.

AWS offers services for computing, databases, storage, analytics, and other functionality. For an overview of all AWS services, see [Cloud Computing with Amazon Web Services](https://aws.amazon.com/what-is-aws/).

See a [usage example](/docs/integrations/vectorstores/documentdb).

```python
from langchain.vectorstores import DocumentDBVectorSearch
```

## Retrievers

### Amazon Kendra

> [Amazon Kendra](https://docs.aws.amazon.com/kendra/latest/dg/what-is-kendra.html) is an intelligent search service
> provided by `Amazon Web Services` (`AWS`). It utilizes advanced natural language processing (NLP) and machine
> learning algorithms to enable powerful search capabilities across various data sources within an organization.
> `Kendra` is designed to help users find the information they need quickly and accurately,
> improving productivity and decision-making.

> With `Kendra`, we can search across a wide range of content types, including documents, FAQs, knowledge bases,
> manuals, and websites. It supports multiple languages and can understand complex queries, synonyms, and
> contextual meanings to provide highly relevant search results.

We need to install the `langchain-aws` library:

```bash
pip install langchain-aws
```

See a [usage example](/docs/integrations/retrievers/amazon_kendra_retriever).

```python
from langchain_aws import AmazonKendraRetriever
```

### Amazon Bedrock (Knowledge Bases)

> [Knowledge bases for Amazon Bedrock](https://aws.amazon.com/bedrock/knowledge-bases/) is an
> `Amazon Web Services` (`AWS`) offering which lets you quickly build RAG applications by using your
> private data to customize foundation model responses.

We need to install the `langchain-aws` library:

```bash
pip install langchain-aws
```

See a [usage example](/docs/integrations/retrievers/bedrock).

```python
from langchain_aws import AmazonKnowledgeBasesRetriever
```
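
A minimal construction sketch, assuming a knowledge base already exists (the knowledge base ID below is a placeholder). The `retrieval_config` dict mirrors the shape of the Bedrock Agent Runtime `Retrieve` API:

```python
# Retrieval configuration for AmazonKnowledgeBasesRetriever; the shape
# mirrors the Bedrock Agent Runtime Retrieve API. numberOfResults caps
# how many chunks come back per query.
retrieval_config = {"vectorSearchConfiguration": {"numberOfResults": 4}}

# Hypothetical usage (knowledge base ID is a placeholder):
# retriever = AmazonKnowledgeBasesRetriever(
#     knowledge_base_id="ABCDEFGHIJ",
#     retrieval_config=retrieval_config,
# )
```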
## Tools

### AWS Lambda

>[`Amazon AWS Lambda`](https://aws.amazon.com/pm/lambda/) is a serverless computing service provided by
> `Amazon Web Services` (`AWS`). It helps developers to build and run applications and services without
> provisioning or managing servers. This serverless architecture enables you to focus on writing and
> deploying code, while AWS automatically takes care of scaling, patching, and managing the
> infrastructure required to run your applications.

We need to install the `boto3` Python library:

```bash
pip install boto3
```

See a [usage example](/docs/integrations/tools/awslambda).

## Memory

### AWS DynamoDB

>[AWS DynamoDB](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/dynamodb/index.html)
> is a fully managed `NoSQL` database service that provides fast and predictable performance with seamless scalability.

We have to configure the [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html).

We need to install the `boto3` library:

```bash
pip install boto3
```

See a [usage example](/docs/integrations/memory/aws_dynamodb).

```python
from langchain.memory import DynamoDBChatMessageHistory
```
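
`DynamoDBChatMessageHistory` expects a table whose partition key is named `SessionId` by default. A sketch of the `create_table` parameters you might pass to `boto3` (the table name is an example):

```python
# Parameters for boto3's DynamoDB create_table call. The partition key
# name "SessionId" matches what DynamoDBChatMessageHistory uses by
# default; the table name is an example.
table_params = {
    "TableName": "SessionTable",
    "KeySchema": [{"AttributeName": "SessionId", "KeyType": "HASH"}],
    "AttributeDefinitions": [{"AttributeName": "SessionId", "AttributeType": "S"}],
    "BillingMode": "PAY_PER_REQUEST",  # on-demand capacity, no provisioning
}

# Hypothetical usage, assuming AWS credentials are configured:
# import boto3
# boto3.resource("dynamodb").create_table(**table_params)
```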
## Callbacks

### SageMaker Tracking

>[Amazon SageMaker](https://aws.amazon.com/sagemaker/) is a fully managed service that is used to quickly
> and easily build, train and deploy machine learning (ML) models.

>[Amazon SageMaker Experiments](https://docs.aws.amazon.com/sagemaker/latest/dg/experiments.html) is a capability
> of `Amazon SageMaker` that lets you organize, track,
> compare and evaluate ML experiments and model versions.

We need to install several Python libraries:

```bash
pip install google-search-results sagemaker
```

See a [usage example](/docs/integrations/callbacks/sagemaker_tracking).

```python
from langchain.callbacks import SageMakerCallbackHandler
```

## Chains

### Amazon Comprehend Moderation Chain

>[Amazon Comprehend](https://aws.amazon.com/comprehend/) is a natural-language processing (NLP) service that
> uses machine learning to uncover valuable insights and connections in text.

We need to install the `boto3` and `nltk` libraries:

```bash
pip install boto3 nltk
```

See a [usage example](/docs/guides/productionization/safety/amazon_comprehend_chain).

```python
from langchain_experimental.comprehend_moderation import AmazonComprehendModerationChain
```