mirror of https://github.com/hwchase17/langchain
synced 2024-11-10 01:10:59 +00:00

docs: integrations reference updates 15 (#25994)

Added missing provider pages and links. Fixed inconsistent formatting.

parent 6e82d2184b
commit 5052e87d7c
@@ -12,7 +12,7 @@ pip install langchain-huggingface

## Chat models

-### Models from Hugging Face
+### ChatHuggingFace

We can use the `Hugging Face` LLM classes or directly use the `ChatHuggingFace` class.
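For illustration, a minimal sketch of wiring `ChatHuggingFace` over a `HuggingFaceEndpoint` LLM; the `repo_id` below is an assumption, not part of this page:

```python
from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint

# Assumes HUGGINGFACEHUB_API_TOKEN is set in the environment;
# the repo_id is illustrative.
llm = HuggingFaceEndpoint(repo_id="HuggingFaceH4/zephyr-7b-beta")
chat = ChatHuggingFace(llm=llm)
print(chat.invoke("What is LangChain?").content)
```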
@@ -24,7 +24,16 @@ from langchain_huggingface import ChatHuggingFace

## LLMs

-### Hugging Face Local Pipelines
+### HuggingFaceEndpoint

See a [usage example](/docs/integrations/llms/huggingface_endpoint).

```python
from langchain_huggingface import HuggingFaceEndpoint
```

### HuggingFacePipeline

Hugging Face models can be run locally through the `HuggingFacePipeline` class.
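For illustration, a minimal local-pipeline sketch; the model id and generation settings are assumptions, and any text-generation model from the Hub can be substituted:

```python
from langchain_huggingface import HuggingFacePipeline

# Downloads and runs the model locally via transformers;
# gpt2 is only a small illustrative choice.
llm = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 32},
)
print(llm.invoke("Hugging Face is"))
```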
@@ -44,6 +53,22 @@ See a [usage example](/docs/integrations/text_embedding/huggingfacehub).

```python
from langchain_huggingface import HuggingFaceEmbeddings
```

### HuggingFaceEndpointEmbeddings

See a [usage example](/docs/integrations/text_embedding/huggingfacehub).

```python
from langchain_huggingface import HuggingFaceEndpointEmbeddings
```

### HuggingFaceInferenceAPIEmbeddings

See a [usage example](/docs/integrations/text_embedding/huggingfacehub).

```python
from langchain_community.embeddings import HuggingFaceInferenceAPIEmbeddings
```

### HuggingFaceInstructEmbeddings

See a [usage example](/docs/integrations/text_embedding/instruct_embeddings).

@@ -63,25 +88,6 @@ See a [usage example](/docs/integrations/text_embedding/bge_huggingface).

```python
from langchain_community.embeddings import HuggingFaceBgeEmbeddings
```
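As an illustrative sketch, embedding a query with `HuggingFaceEmbeddings`; the `model_name` below is an assumption (the class also ships with a default model):

```python
from langchain_huggingface import HuggingFaceEmbeddings

# model_name is illustrative; any sentence-transformers model id works.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vector = embeddings.embed_query("Hello, world!")
print(len(vector))  # embedding dimensionality
```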
### Hugging Face Text Embeddings Inference (TEI)

>[Hugging Face Text Embeddings Inference (TEI)](https://huggingface.co/docs/text-embeddings-inference/index) is a toolkit for deploying and serving open-source
> text embeddings and sequence classification models. `TEI` enables high-performance extraction for the most popular models,
> including `FlagEmbedding`, `Ember`, `GTE`, and `E5`.

We need to install the `huggingface-hub` Python package.

```bash
pip install huggingface-hub
```

See a [usage example](/docs/integrations/text_embedding/text_embeddings_inference).

```python
from langchain_community.embeddings import HuggingFaceHubEmbeddings
```
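As a minimal sketch, `HuggingFaceHubEmbeddings` can point at a self-hosted `TEI` endpoint; the URL below is a placeholder assumption for a locally running server:

```python
from langchain_community.embeddings import HuggingFaceHubEmbeddings

# Placeholder URL: assumes a TEI server is already running locally,
# e.g. started from the official text-embeddings-inference Docker image.
embeddings = HuggingFaceHubEmbeddings(model="http://localhost:8080")
print(embeddings.embed_query("What is deep learning?")[:5])
```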
## Document Loaders

### Hugging Face dataset

@@ -104,7 +110,34 @@ See a [usage example](/docs/integrations/document_loaders/hugging_face_dataset).

```python
from langchain_community.document_loaders.hugging_face_dataset import HuggingFaceDatasetLoader
```

### Hugging Face model loader

>Load model information from `Hugging Face Hub`, including README content.
>
>This loader interfaces with the `Hugging Face Models API` to fetch
> and load model metadata and README files.
> The API allows you to search and filter models based on
> specific criteria such as model tags, authors, and more.

```python
from langchain_community.document_loaders import HuggingFaceModelLoader
```
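A short illustrative sketch of the dataset loader; the dataset name and text column below are assumptions:

```python
from langchain_community.document_loaders.hugging_face_dataset import HuggingFaceDatasetLoader

# "imdb" and its "text" column are illustrative choices.
loader = HuggingFaceDatasetLoader("imdb", page_content_column="text")
docs = loader.load()
print(docs[0].page_content[:100])
```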
### Image captions

It uses Hugging Face models to generate image captions.

We need to install several Python packages.

```bash
pip install transformers pillow
```

See a [usage example](/docs/integrations/document_loaders/image_captions).

```python
from langchain_community.document_loaders import ImageCaptionLoader
```
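A minimal sketch; the image paths are placeholders, and a default captioning model is downloaded from Hugging Face on first use:

```python
from langchain_community.document_loaders import ImageCaptionLoader

# Placeholder image paths; yields one Document per image,
# with the generated caption as page_content.
loader = ImageCaptionLoader(["./photo1.jpg", "./photo2.jpg"])
docs = loader.load()
```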
## Tools

@@ -124,3 +157,12 @@ See a [usage example](/docs/integrations/tools/huggingface_tools).

```python
from langchain_community.agent_toolkits.load_tools import load_huggingface_tool
```

### Hugging Face Text-to-Speech Model Inference

> It is a wrapper around the `Hugging Face Text-to-Speech Inference API`.

```python
from langchain_community.tools.audio import HuggingFaceTextToSpeechModelInference
```
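A hedged sketch of the tool; the model id and file extension are assumptions, and a Hugging Face API token is expected in the environment:

```python
from langchain_community.tools.audio import HuggingFaceTextToSpeechModelInference

# Illustrative model id; expects a Hugging Face API token
# (HUGGINGFACE_API_KEY) in the environment.
tts = HuggingFaceTextToSpeechModelInference(
    model="suno/bark-small",
    file_extension="wav",
)
audio_file_path = tts.run("Hello, LangChain!")  # writes an audio file, returns its path
```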
docs/docs/integrations/providers/apple.mdx (new file, 22 lines)

@@ -0,0 +1,22 @@
# Apple

>[Apple Inc. (Wikipedia)](https://en.wikipedia.org/wiki/Apple_Inc.) is an American
> multinational corporation and technology company.
>
> [iMessage (Wikipedia)](https://en.wikipedia.org/wiki/IMessage) is an instant
> messaging service developed by Apple Inc. and launched in 2011.
> `iMessage` functions exclusively on Apple platforms.

## Installation and Setup

See [setup instructions](/docs/integrations/chat_loaders/imessage).

## Chat loader

It loads chat sessions from the `iMessage` `chat.db` SQLite file.

See a [usage example](/docs/integrations/chat_loaders/imessage).

```python
from langchain_community.chat_loaders.imessage import IMessageChatLoader
```
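A minimal sketch; the `chat.db` path is a placeholder (on macOS the database typically lives at `~/Library/Messages/chat.db`, and working on a copy is recommended):

```python
from langchain_community.chat_loaders.imessage import IMessageChatLoader

# Placeholder path to a copy of the iMessage database.
loader = IMessageChatLoader(path="./chat.db")
chat_sessions = loader.load()  # one ChatSession per conversation
```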
@@ -1,69 +0,0 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Nomic\n",
    "\n",
    "Nomic currently offers two products:\n",
    "\n",
    "- Atlas: their Visual Data Engine\n",
    "- GPT4All: their Open Source Edge Language Model Ecosystem\n",
    "\n",
    "The Nomic integration exists in its own [partner package](https://pypi.org/project/langchain-nomic/). You can install it with:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%pip install -qU langchain-nomic"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Currently, you can import their hosted [embedding model](/docs/integrations/text_embedding/nomic) as follows:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "id": "y8ku6X96sebl"
   },
   "outputs": [],
   "source": [
    "from langchain_nomic import NomicEmbeddings"
   ]
  }
 ],
 "metadata": {
  "colab": {
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
docs/docs/integrations/providers/nomic.mdx (new file, 58 lines)

@@ -0,0 +1,58 @@
# Nomic

>[Nomic](https://www.nomic.ai/) builds tools that enable everyone to interact with AI-scale datasets and run AI models on consumer computers.
>
>`Nomic` currently offers two products:
>
>- `Atlas`: the Visual Data Engine
>- `GPT4All`: the Open Source Edge Language Model Ecosystem

The Nomic integration lives in the [langchain-nomic](https://pypi.org/project/langchain-nomic/)
partner package and in [langchain-community](https://pypi.org/project/langchain-community/).

## Installation

You can install them with:

```bash
pip install -U langchain-nomic
pip install -U langchain-community
```

## LLMs

### GPT4All

See [a usage example](/docs/integrations/llms/gpt4all).

```python
from langchain_community.llms import GPT4All
```
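A minimal local-inference sketch; the model path is a placeholder for a GPT4All model file you have downloaded separately:

```python
from langchain_community.llms import GPT4All

# Placeholder path to a locally downloaded GPT4All model file.
llm = GPT4All(model="./models/mistral-7b-openorca.gguf2.Q4_0.gguf", max_tokens=512)
print(llm.invoke("The capital of France is"))
```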
## Embedding models

### NomicEmbeddings

See [a usage example](/docs/integrations/text_embedding/nomic).

```python
from langchain_nomic import NomicEmbeddings
```
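A short sketch of the hosted embeddings; the model name is illustrative, and a `NOMIC_API_KEY` is assumed to be set in the environment:

```python
from langchain_nomic import NomicEmbeddings

# Assumes NOMIC_API_KEY is set; the model name is illustrative.
embeddings = NomicEmbeddings(model="nomic-embed-text-v1.5")
vector = embeddings.embed_query("What is Atlas?")
```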
### GPT4All

See [a usage example](/docs/integrations/text_embedding/gpt4all).

```python
from langchain_community.embeddings import GPT4AllEmbeddings
```

## Vector store

### Atlas

See [a usage example and installation instructions](/docs/integrations/vectorstores/atlas).

```python
from langchain_community.vectorstores import AtlasDB
```
docs/docs/integrations/providers/transwarp.mdx (new file, 34 lines)

@@ -0,0 +1,34 @@
# Transwarp

>[Transwarp](https://www.transwarp.cn/en/introduction) aims to build
> enterprise-level big data and AI infrastructure software,
> to shape the future of the data world. It provides enterprises with
> infrastructure software and services around the whole data lifecycle,
> including integration, storage, governance, modeling, analysis,
> mining, and circulation.
>
> `Transwarp` focuses on technology research and
> development and has accumulated core technologies in
> distributed computing, SQL compilation, database technology,
> unified multi-model data management, container-based cloud computing,
> and big data analytics and intelligence.

## Installation

You have to install several Python packages:

```bash
pip install -U tiktoken hippo-api
```

You also need to obtain the connection configuration.

## Vector stores

### Hippo

See [a usage example and installation instructions](/docs/integrations/vectorstores/hippo).

```python
from langchain_community.vectorstores.hippo import Hippo
```
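A minimal sketch of connecting to `Hippo`; the host, port, table name, and embedding model are placeholder assumptions:

```python
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores.hippo import Hippo

# Placeholder connection parameters for a locally running Hippo server.
vector_store = Hippo.from_texts(
    texts=["LangChain integrates with Hippo."],
    embedding=HuggingFaceEmbeddings(),
    connection_args={"host": "127.0.0.1", "port": "7788"},
    table_name="langchain_demo",
)
docs = vector_store.similarity_search("Hippo", k=1)
```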
@@ -6,45 +6,18 @@
"source": [
|
||||
"# Upstage\n",
|
||||
"\n",
|
||||
"[Upstage](https://upstage.ai) is a leading artificial intelligence (AI) company specializing in delivering above-human-grade performance LLM components. \n"
|
||||
">[Upstage](https://upstage.ai) is a leading artificial intelligence (AI) company specializing in delivering above-human-grade performance LLM components.\n",
|
||||
">\n",
|
||||
">**Solar Mini Chat** is a fast yet powerful advanced large language model focusing on English and Korean. It has been specifically fine-tuned for multi-turn chat purposes, showing enhanced performance across a wide range of natural language processing tasks, like multi-turn conversation or tasks that require an understanding of long contexts, such as RAG (Retrieval-Augmented Generation), compared to other models of a similar size. This fine-tuning equips it with the ability to handle longer conversations more effectively, making it particularly adept for interactive applications.\n",
|
||||
"\n",
|
||||
">Other than Solar, Upstage also offers features for real-world RAG (retrieval-augmented generation), such as **Groundedness Check** and **Layout Analysis**. \n"
|
||||
]
|
||||
},
|
||||
-{
- "cell_type": "markdown",
- "metadata": {},
- "source": [
-  "## Solar LLM\n",
-  "\n",
-  "**Solar Mini Chat** is a fast yet powerful advanced large language model focusing on English and Korean. It has been specifically fine-tuned for multi-turn chat purposes, showing enhanced performance across a wide range of natural language processing tasks, like multi-turn conversation or tasks that require an understanding of long contexts, such as RAG (Retrieval-Augmented Generation), compared to other models of a similar size. This fine-tuning equips it with the ability to handle longer conversations more effectively, making it particularly adept for interactive applications.\n",
-  "\n",
-  "Other than Solar, Upstage also offers features for real-world RAG (retrieval-augmented generation), such as **Groundedness Check** and **Layout Analysis**. "
- ]
-},
-{
- "cell_type": "markdown",
- "metadata": {},
- "source": [
-  "## Installation and Setup\n",
-  "\n",
-  "Install `langchain-upstage` package:\n",
-  "\n",
-  "```bash\n",
-  "pip install -qU langchain-core langchain-upstage\n",
-  "```"
- ]
-},
-{
- "cell_type": "markdown",
- "metadata": {},
- "source": [
-  "Get [API Keys](https://console.upstage.ai) and set environment variable `UPSTAGE_API_KEY`."
- ]
-},
{
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-  "## Upstage LangChain integrations\n",
+  "### Upstage LangChain integrations\n",
  "\n",
  "| API | Description | Import | Example usage |\n",
  "| --- | --- | --- | --- |\n",
@@ -60,9 +33,20 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-  "## Quick Examples\n",
+  "## Installation and Setup\n",
  "\n",
-  "### Environment Setup"
+  "Install `langchain-upstage` package:\n",
+  "\n",
+  "```bash\n",
+  "pip install -qU langchain-core langchain-upstage\n",
+  "```\n"
 ]
},
+{
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+  "Get [API Keys](https://console.upstage.ai) and set environment variable `UPSTAGE_API_KEY`."
+ ]
+},
{
@@ -80,8 +64,11 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
  "## Chat models\n",
  "\n",
-  "### Chat\n"
+  "### Solar LLM\n",
+  "\n",
+  "See [a usage example](/docs/integrations/chat/upstage)."
 ]
},
{
@@ -101,10 +88,9 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
  "## Embedding models\n",
  "\n",
-  "\n",
-  "### Text embedding\n",
-  "\n"
+  "See [a usage example](/docs/integrations/text_embedding/upstage)."
 ]
},
{
@@ -134,7 +120,45 @@
  }
 },
 "source": [
-  "### Groundedness Check"
+  "## Document loader\n",
+  "\n",
+  "### Layout Analysis\n",
+  "\n",
+  "See [a usage example](/docs/integrations/document_loaders/upstage)."
 ]
},
+{
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+  "from langchain_upstage import UpstageLayoutAnalysisLoader\n",
+  "\n",
+  "file_path = \"/PATH/TO/YOUR/FILE.pdf\"\n",
+  "layzer = UpstageLayoutAnalysisLoader(file_path, split=\"page\")\n",
+  "\n",
+  "# For improved memory efficiency, consider using the lazy_load method to load documents page by page.\n",
+  "docs = layzer.load()  # or layzer.lazy_load()\n",
+  "\n",
+  "for doc in docs[:3]:\n",
+  "    print(doc)"
+ ]
+},
+{
+ "cell_type": "markdown",
+ "metadata": {
+  "collapsed": false,
+  "jupyter": {
+   "outputs_hidden": false
+  }
+ },
+ "source": [
+  "## Tools\n",
+  "\n",
+  "### Groundedness Check\n",
+  "\n",
+  "See [a usage example](/docs/integrations/tools/upstage_groundedness_check)."
+ ]
+},
{
@@ -159,36 +183,6 @@
 "response = groundedness_check.invoke(request_input)\n",
 "print(response)"
]
},
-{
- "cell_type": "markdown",
- "metadata": {
-  "collapsed": false,
-  "jupyter": {
-   "outputs_hidden": false
-  }
- },
- "source": [
-  "### Layout Analysis"
- ]
-},
-{
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
-  "from langchain_upstage import UpstageLayoutAnalysisLoader\n",
-  "\n",
-  "file_path = \"/PATH/TO/YOUR/FILE.pdf\"\n",
-  "layzer = UpstageLayoutAnalysisLoader(file_path, split=\"page\")\n",
-  "\n",
-  "# For improved memory efficiency, consider using the lazy_load method to load documents page by page.\n",
-  "docs = layzer.load()  # or layzer.lazy_load()\n",
-  "\n",
-  "for doc in docs[:3]:\n",
-  "    print(doc)"
- ]
-}
],
"metadata": {

@@ -210,7 +204,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
- "version": "3.10.13"
+ "version": "3.10.12"
 }
},
"nbformat": 4,