mirror of https://github.com/hwchase17/langchain
synced 2024-11-10 01:10:59 +00:00

docs: integrations references update (#25217)

Added missing provider pages. Fixed formats and added descriptions and links.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>

This commit is contained in:
parent 5f5e8c9a60
commit 4a812e3193
@@ -4,9 +4,23 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# ChatLlamaCpp\n",
"# Llama.cpp\n",
"\n",
"This notebook provides a quick overview for getting started with a chat model integrated with [llama cpp python](https://github.com/abetlen/llama-cpp-python)."
">[llama.cpp python](https://github.com/abetlen/llama-cpp-python) library provides simple Python bindings for `@ggerganov`\n",
">[llama.cpp](https://github.com/ggerganov/llama.cpp).\n",
">\n",
">This package provides:\n",
">\n",
"> - Low-level access to C API via ctypes interface.\n",
"> - High-level Python API for text completion\n",
"> - `OpenAI`-like API\n",
"> - `LangChain` compatibility\n",
"> - `LlamaIndex` compatibility\n",
"> - OpenAI compatible web server\n",
"> - Local Copilot replacement\n",
"> - Function Calling support\n",
"> - Vision API support\n",
"> - Multiple Models\n"
]
},
{
@@ -410,7 +424,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.8"
"version": "3.10.12"
}
},
"nbformat": 4,
@@ -99,7 +99,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
"version": "3.10.12"
},
"vscode": {
"interpreter": {
@@ -17,7 +17,7 @@
"source": [
"# ChatPerplexity\n",
"\n",
"This notebook covers how to get started with Perplexity chat models."
"This notebook covers how to get started with `Perplexity` chat models."
]
},
{
@@ -37,17 +37,31 @@
"from langchain_core.prompts import ChatPromptTemplate"
]
},
{
"cell_type": "markdown",
"id": "b26e2035-2f81-4451-ba44-fa2e2d5aeb62",
"metadata": {},
"source": [
"The code provided assumes that your PPLX_API_KEY is set in your environment variables. If you would like to manually specify your API key and also choose a different model, you can use the following code:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d986aac6-1bae-4608-8514-d3ba5b35b10e",
"metadata": {},
"outputs": [],
"source": [
"chat = ChatPerplexity(\n",
" temperature=0, pplx_api_key=\"YOUR_API_KEY\", model=\"llama-3-sonar-small-32k-online\"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "97a8ce3a",
"metadata": {},
"source": [
"The code provided assumes that your PPLX_API_KEY is set in your environment variables. If you would like to manually specify your API key and also choose a different model, you can use the following code:\n",
"\n",
"```python\n",
"chat = ChatPerplexity(temperature=0, pplx_api_key=\"YOUR_API_KEY\", model=\"llama-3-sonar-small-32k-online\")\n",
"```\n",
"\n",
"You can check a list of available models [here](https://docs.perplexity.ai/docs/model-cards). For reproducibility, we can set the API key dynamically by taking it as an input in this notebook."
]
},
@@ -221,7 +235,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
"version": "3.10.12"
}
},
"nbformat": 4,
@@ -1,26 +1,50 @@
# Llama.cpp

This page covers how to use [llama.cpp](https://github.com/ggerganov/llama.cpp) within LangChain.
It is broken into two parts: installation and setup, and then references to specific Llama-cpp wrappers.
>[llama.cpp python](https://github.com/abetlen/llama-cpp-python) library provides simple Python bindings for `@ggerganov`
>[llama.cpp](https://github.com/ggerganov/llama.cpp).
>
>This package provides:
>
> - Low-level access to C API via ctypes interface.
> - High-level Python API for text completion
> - `OpenAI`-like API
> - `LangChain` compatibility
> - `LlamaIndex` compatibility
> - OpenAI compatible web server
> - Local Copilot replacement
> - Function Calling support
> - Vision API support
> - Multiple Models

## Installation and Setup
- Install the Python package with `pip install llama-cpp-python`

- Install the Python package
```bash
pip install llama-cpp-python
```
- Download one of the [supported models](https://github.com/ggerganov/llama.cpp#description) and convert it to the llama.cpp format per the [instructions](https://github.com/ggerganov/llama.cpp)
## Wrappers

### LLM
## Chat models

See a [usage example](/docs/integrations/chat/llamacpp).

```python
from langchain_community.chat_models import ChatLlamaCpp
```
## LLMs

See a [usage example](/docs/integrations/llms/llamacpp).

There exists a LlamaCpp LLM wrapper, which you can access with
```python
from langchain_community.llms import LlamaCpp
```
For a more detailed walkthrough of this, see [this notebook](/docs/integrations/llms/llamacpp)
### Embeddings
## Embedding models

See a [usage example](/docs/integrations/text_embedding/llamacpp).

There exists a LlamaCpp Embeddings wrapper, which you can access with
```python
from langchain_community.embeddings import LlamaCppEmbeddings
```
For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/llamacpp)

docs/docs/integrations/providers/maritalk.mdx (new file, 21 lines)
@@ -0,0 +1,21 @@
# MariTalk

>[MariTalk](https://www.maritaca.ai/en) is an LLM-based chatbot trained to meet the needs of Brazil.

## Installation and Setup

You have to get the MariTalk API key.

You also need to install the `httpx` Python package.

```bash
pip install httpx
```

## Chat models

See a [usage example](/docs/integrations/chat/maritalk).

```python
from langchain_community.chat_models import ChatMaritalk
```

docs/docs/integrations/providers/mlx.mdx (new file, 34 lines)
@@ -0,0 +1,34 @@
# MLX

>[MLX](https://ml-explore.github.io/mlx/build/html/index.html) is a `NumPy`-like array framework
> designed for efficient and flexible machine learning on `Apple` silicon,
> brought to you by `Apple machine learning research`.

## Installation and Setup

Install several Python packages:

```bash
pip install mlx-lm transformers huggingface_hub
```

## Chat models

See a [usage example](/docs/integrations/chat/mlx).

```python
from langchain_community.chat_models.mlx import ChatMLX
```

## LLMs

### MLX Local Pipelines

See a [usage example](/docs/integrations/llms/mlx_pipelines).

```python
from langchain_community.llms.mlx_pipeline import MLXPipeline
```

docs/docs/integrations/providers/octoai.mdx (new file, 37 lines)
@@ -0,0 +1,37 @@
# OctoAI

>[OctoAI](https://docs.octoai.cloud/docs) offers easy access to efficient compute
> and enables users to integrate their choice of AI models into applications.
> The `OctoAI` compute service helps you run, tune, and scale AI applications easily.

## Installation and Setup

- Install the `openai` Python package:
```bash
pip install openai
```
- Register on `OctoAI` and get an API Token from [your OctoAI account page](https://octoai.cloud/settings).

## Chat models

See a [usage example](/docs/integrations/chat/octoai).

```python
from langchain_community.chat_models import ChatOctoAI
```

## LLMs

See a [usage example](/docs/integrations/llms/octoai).

```python
from langchain_community.llms.octoai_endpoint import OctoAIEndpoint
```

## Embedding models

```python
from langchain_community.embeddings.octoai_embeddings import OctoAIEmbeddings
```

docs/docs/integrations/providers/perplexity.mdx (new file, 25 lines)
@@ -0,0 +1,25 @@
# Perplexity

>[Perplexity](https://www.perplexity.ai/pro) is the most powerful way to search
> the internet with unlimited Pro Search, upgraded AI models, unlimited file upload,
> image generation, and API credits.
>
> You can check a [list of available models](https://docs.perplexity.ai/docs/model-cards).

## Installation and Setup

Install the `openai` Python package:

```bash
pip install openai
```

Get your API key from [here](https://docs.perplexity.ai/docs/getting-started).

## Chat models

See a [usage example](/docs/integrations/chat/perplexity).

```python
from langchain_community.chat_models import ChatPerplexity
```
@@ -4,9 +4,23 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Llama-cpp\n",
"# Llama.cpp\n",
"\n",
"This notebook goes over how to use Llama-cpp embeddings within LangChain"
">[llama.cpp python](https://github.com/abetlen/llama-cpp-python) library provides simple Python bindings for `@ggerganov`\n",
">[llama.cpp](https://github.com/ggerganov/llama.cpp).\n",
">\n",
">This package provides:\n",
">\n",
"> - Low-level access to C API via ctypes interface.\n",
"> - High-level Python API for text completion\n",
"> - `OpenAI`-like API\n",
"> - `LangChain` compatibility\n",
"> - `LlamaIndex` compatibility\n",
"> - OpenAI compatible web server\n",
"> - Local Copilot replacement\n",
"> - Function Calling support\n",
"> - Vision API support\n",
"> - Multiple Models\n"
]
},
{
@@ -80,9 +94,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}