mirror of https://github.com/hwchase17/langchain
synced 2024-11-06 03:20:49 +00:00

docs: ecosystem/integrations update 5 (#5752)

- added missed integrations to `docs/ecosystem/integrations/`
- updated notebooks to a consistent format: changed titles and file names; added descriptions

#### Who can review?
@hwchase17 @dev2049

This commit is contained in:
parent aea090045b
commit 92a5f00ffb
@@ -1,4 +1,4 @@
-# Bedrock
+# Amazon Bedrock
 
 >[Amazon Bedrock](https://aws.amazon.com/bedrock/) is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case.
 
@@ -18,7 +18,7 @@ from langchain import Bedrock
 
 ## Text Embedding Models
 
-See a [usage example](../modules/models/text_embedding/examples/bedrock.ipynb).
+See a [usage example](../modules/models/text_embedding/examples/amazon_bedrock.ipynb).
 
 ```python
 from langchain.embeddings import BedrockEmbeddings
 ```
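To make the page concrete: a minimal sketch of using the `BedrockEmbeddings` class imported above, assuming AWS credentials are already configured (e.g. via the AWS CLI); the region value is an illustrative assumption.

```python
from langchain.embeddings import BedrockEmbeddings

# Assumes AWS credentials are configured for boto3; the region is illustrative.
embeddings = BedrockEmbeddings(region_name="us-east-1")

# Embed a single query string, or a batch of documents.
query_vector = embeddings.embed_query("What is Amazon Bedrock?")
doc_vectors = embeddings.embed_documents(["Bedrock exposes foundation models via an API."])
```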
docs/integrations/anthropic.md (new file, 26 lines)
@@ -0,0 +1,26 @@
+# Anthropic
+
+>[Anthropic](https://en.wikipedia.org/wiki/Anthropic) is an American artificial intelligence (AI) startup and
+> public-benefit corporation, founded by former members of OpenAI. `Anthropic` specializes in developing general AI
+> systems and language models, with a company ethos of responsible AI usage.
+> `Anthropic` develops a chatbot, named `Claude`. Similar to `ChatGPT`, `Claude` uses a messaging
+> interface where users can submit questions or requests and receive highly detailed and relevant responses.
+
+## Installation and Setup
+
+```bash
+pip install anthropic
+```
+
+See the [setup documentation](https://console.anthropic.com/docs/access).
+
+## Chat Models
+
+See a [usage example](../modules/models/chat/integrations/anthropic.ipynb)
+
+```python
+from langchain.chat_models import ChatAnthropic
+```
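A minimal usage sketch for the `ChatAnthropic` class this new page imports, assuming an Anthropic API key from the setup step above:

```python
from langchain.chat_models import ChatAnthropic
from langchain.schema import HumanMessage

# Assumes ANTHROPIC_API_KEY is set in the environment, per the setup above.
chat = ChatAnthropic()
chat([HumanMessage(content="Say hello in French.")])
```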
@@ -1,7 +1,8 @@
 # Beam
 
-This page covers how to use Beam within LangChain.
-It is broken into two parts: installation and setup, and then references to specific Beam wrappers.
+>[Beam](https://docs.beam.cloud/introduction) makes it easy to run code on GPUs, deploy scalable web APIs,
+> schedule cron jobs, and run massively parallel workloads — without managing any infrastructure.
 
 ## Installation and Setup
 
@@ -9,19 +10,19 @@ It is broken into two parts: installation and setup, and then references to specific Beam wrappers.
 - Install the Beam CLI with `curl https://raw.githubusercontent.com/slai-labs/get-beam/main/get-beam.sh -sSfL | sh`
 - Register API keys with `beam configure`
 - Set environment variables (`BEAM_CLIENT_ID`) and (`BEAM_CLIENT_SECRET`)
-- Install the Beam SDK `pip install beam-sdk`
+- Install the Beam SDK:
+```bash
+pip install beam-sdk
+```
 
-## Wrappers
+## LLM
 
-### LLM
-
-There exists a Beam LLM wrapper, which you can access with
 ```python
 from langchain.llms.beam import Beam
 ```
 
-## Define your Beam app.
+### Example of the Beam app
 
 This is the environment you’ll be developing against once you start the app.
 It's also used to define the maximum response length from the model.
@@ -44,7 +45,7 @@ llm = Beam(model_name="gpt2",
            verbose=False)
 ```
 
-## Deploy your Beam app
+### Deploy the Beam app
 
 Once defined, you can deploy your Beam app by calling your model's `_deploy()` method.
 
@@ -52,9 +53,9 @@ Once defined, you can deploy your Beam app by calling your model's `_deploy()` method.
 llm._deploy()
 ```
 
-## Call your Beam app
+### Call the Beam app
 
-Once a beam model is deployed, it can be called by callying your model's `_call()` method.
+Once a beam model is deployed, it can be called by calling your model's `_call()` method.
 This returns the GPT2 text response to your prompt.
 
 ```python
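Pulling the three steps above together, a sketch of the full define/deploy/call flow; only `model_name="gpt2"`, `verbose=False`, `_deploy()`, and `_call()` come from this page, and the remaining constructor arguments are illustrative assumptions:

```python
from langchain.llms.beam import Beam

# Assumes BEAM_CLIENT_ID / BEAM_CLIENT_SECRET are set, per the setup above.
# Parameters other than model_name and verbose are illustrative.
llm = Beam(model_name="gpt2",
           name="langchain-gpt2",
           cpu=8,
           memory="32Gi",
           gpu="A10G",
           python_version="python3.8",
           python_packages=["transformers", "torch"],
           max_length="50",
           verbose=False)

llm._deploy()  # deploy the Beam app
response = llm._call("Running machine learning on a remote GPU")
```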
docs/integrations/google_vertex_ai.md (new file, 24 lines)
@@ -0,0 +1,24 @@
+# Google Vertex AI
+
+>[Vertex AI](https://cloud.google.com/vertex-ai/docs/start/introduction-unified-platform) is a machine learning (ML)
+> platform that lets you train and deploy ML models and AI applications.
+> `Vertex AI` combines data engineering, data science, and ML engineering workflows, enabling your teams to
+> collaborate using a common toolset.
+
+## Installation and Setup
+
+```bash
+pip install google-cloud-aiplatform
+```
+
+See the [setup instructions](../modules/models/chat/integrations/google_vertex_ai_palm.ipynb)
+
+## Chat Models
+
+See a [usage example](../modules/models/chat/integrations/google_vertex_ai_palm.ipynb)
+
+```python
+from langchain.chat_models import ChatVertexAI
+```
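A minimal sketch of the `ChatVertexAI` class this new page imports, assuming Google Cloud application-default credentials are already configured:

```python
from langchain.chat_models import ChatVertexAI
from langchain.schema import HumanMessage

# Assumes credentials from `gcloud auth application-default login`
# and a GCP project with the Vertex AI PaLM API enabled.
chat = ChatVertexAI()
chat([HumanMessage(content="Hello, Vertex AI!")])
```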
@@ -47,7 +47,7 @@ To use the wrapper for a model hosted on Hugging Face Hub:
 ```python
 from langchain.embeddings import HuggingFaceHubEmbeddings
 ```
-For a more detailed walkthrough of this, see [this notebook](../modules/models/text_embedding/examples/huggingfacehub.ipynb)
+For a more detailed walkthrough of this, see [this notebook](../modules/models/text_embedding/examples/huggingface_hub.ipynb)
 
 ### Tokenizer
 
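A minimal sketch of using `HuggingFaceHubEmbeddings`, assuming a Hugging Face Hub API token; the `repo_id` value is an illustrative assumption:

```python
import os

from langchain.embeddings import HuggingFaceHubEmbeddings

# Assumes a valid Hugging Face Hub token; repo_id is illustrative.
os.environ["HUGGINGFACEHUB_API_TOKEN"] = "<your token>"

embeddings = HuggingFaceHubEmbeddings(repo_id="sentence-transformers/all-mpnet-base-v2")
embeddings.embed_query("Hello world")
```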
@@ -35,7 +35,6 @@ from langchain.llms import AzureOpenAI
 For a more detailed walkthrough of the `Azure` wrapper, see [this notebook](../modules/models/llms/integrations/azure_openai_example.ipynb)
 
-
 
 ## Text Embedding Model
 
 ```python
@@ -44,6 +43,14 @@ from langchain.embeddings import OpenAIEmbeddings
 For a more detailed walkthrough of this, see [this notebook](../modules/models/text_embedding/examples/openai.ipynb)
 
 
+## Chat Model
+
+```python
+from langchain.chat_models import ChatOpenAI
+```
+
+For a more detailed walkthrough of this, see [this notebook](../modules/models/chat/integrations/openai.ipynb)
 
 ## Tokenizer
 
 There are several places you can use the `tiktoken` tokenizer. By default, it is used to count tokens
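A minimal sketch of the new `ChatOpenAI` section, assuming `OPENAI_API_KEY` is set; the model name is illustrative:

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

# Assumes OPENAI_API_KEY is set; model_name is illustrative.
chat = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
chat([HumanMessage(content="Translate 'good morning' to Italian.")])

# The tiktoken tokenizer mentioned in the Tokenizer section can be used
# for token counting through the same wrapper:
chat.get_num_tokens("Translate 'good morning' to Italian.")
```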
@@ -1,19 +1,23 @@
 # Prediction Guard
 
-This page covers how to use the Prediction Guard ecosystem within LangChain.
-It is broken into two parts: installation and setup, and then references to specific Prediction Guard wrappers.
+>[Prediction Guard](https://docs.predictionguard.com/) gives quick and easy access to state-of-the-art open and closed access LLMs, without needing to spend days and weeks figuring out all of the implementation details, managing a bunch of different API specs, and setting up the infrastructure for model deployments.
 
 ## Installation and Setup
-- Install the Python SDK with `pip install predictionguard`
+- Install the Python SDK:
+```bash
+pip install predictionguard
+```
+
 - Get a Prediction Guard access token (as described [here](https://docs.predictionguard.com/)) and set it as an environment variable (`PREDICTIONGUARD_TOKEN`)
 
-## LLM Wrapper
+## LLM
 
-There exists a Prediction Guard LLM wrapper, which you can access with
 ```python
 from langchain.llms import PredictionGuard
 ```
 
+### Example
 You can provide the name of the Prediction Guard model as an argument when initializing the LLM:
 ```python
 pgllm = PredictionGuard(model="MPT-7B-Instruct")
@@ -24,14 +28,12 @@ You can also provide your access token directly as an argument:
 pgllm = PredictionGuard(model="MPT-7B-Instruct", token="<your access token>")
 ```
 
-Finally, you can provide an "output" argument that is used to structure/control the output of the LLM:
+Also, you can provide an "output" argument that is used to structure/control the output of the LLM:
 ```python
 pgllm = PredictionGuard(model="MPT-7B-Instruct", output={"type": "boolean"})
 ```
 
-## Example usage
+#### Basic usage of the controlled or guarded LLM:
 
-Basic usage of the controlled or guarded LLM wrapper:
 ```python
 import os
 
@@ -72,7 +74,7 @@ pgllm = PredictionGuard(model="MPT-7B-Instruct",
 pgllm(prompt.format(query="What kind of post is this?"))
 ```
 
-Basic LLM Chaining with the Prediction Guard wrapper:
+#### Basic LLM Chaining with Prediction Guard:
 ```python
 import os
 
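Condensing the page's setup and "output" argument into one runnable sketch; the token placeholder and the boolean constraint both come from this page, while the prompt string is illustrative:

```python
import os

from langchain.llms import PredictionGuard

# Access token per the setup step above.
os.environ["PREDICTIONGUARD_TOKEN"] = "<your Prediction Guard access token>"

# Constrain the model to a boolean answer, as described on this page.
pgllm = PredictionGuard(model="MPT-7B-Instruct", output={"type": "boolean"})
pgllm("Is the following sentence a question: 'What time is it?'")
```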
@@ -1,31 +1,35 @@
 # PromptLayer
 
-This page covers how to use [PromptLayer](https://www.promptlayer.com) within LangChain.
-It is broken into two parts: installation and setup, and then references to specific PromptLayer wrappers.
+>[PromptLayer](https://docs.promptlayer.com/what-is-promptlayer/wxpF9EZkUwvdkwvVE9XEvC/how-promptlayer-works/dvgGSxNe6nB1jj8mUVbG8r)
+> is a devtool that allows you to track, manage, and share your GPT prompt engineering.
+> It acts as a middleware between your code and OpenAI's python library, recording all your API requests
+> and saving relevant metadata for easy exploration and search in the [PromptLayer](https://www.promptlayer.com) dashboard.
 
 ## Installation and Setup
 
-If you want to work with PromptLayer:
-- Install the promptlayer python library `pip install promptlayer`
+- Install the `promptlayer` python library
+```bash
+pip install promptlayer
+```
 - Create a PromptLayer account
 - Create an API token and set it as an environment variable (`PROMPTLAYER_API_KEY`)
 
-## Wrappers
-
-### LLM
+## LLM
 
-There exists an PromptLayer OpenAI LLM wrapper, which you can access with
 ```python
 from langchain.llms import PromptLayerOpenAI
 ```
 
-To tag your requests, use the argument `pl_tags` when instanializing the LLM
+### Example
+
+To tag your requests, use the argument `pl_tags` when instantiating the LLM
 ```python
 from langchain.llms import PromptLayerOpenAI
 llm = PromptLayerOpenAI(pl_tags=["langchain-requests", "chatbot"])
 ```
 
-To get the PromptLayer request id, use the argument `return_pl_id` when instanializing the LLM
+To get the PromptLayer request id, use the argument `return_pl_id` when instantiating the LLM
 ```python
 from langchain.llms import PromptLayerOpenAI
 llm = PromptLayerOpenAI(return_pl_id=True)
@@ -42,8 +46,14 @@ You can use the PromptLayer request ID to add a prompt, score, or other metadata
 
 This LLM is identical to the [OpenAI LLM](./openai.md), except that
 - all your requests will be logged to your PromptLayer account
-- you can add `pl_tags` when instantializing to tag your requests on PromptLayer
-- you can add `return_pl_id` when instantializing to return a PromptLayer request id to use [while tracking requests](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9).
+- you can add `pl_tags` when instantiating to tag your requests on PromptLayer
+- you can add `return_pl_id` when instantiating to return a PromptLayer request id to use [while tracking requests](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9).
 
+## Chat Model
+
+```python
+from langchain.chat_models import PromptLayerChatOpenAI
+```
+
+See a [usage example](../modules/models/chat/integrations/promptlayer_chatopenai.ipynb).
 
-PromptLayer also provides native wrappers for [`PromptLayerChatOpenAI`](../modules/models/chat/integrations/promptlayer_chatopenai.ipynb) and `PromptLayerOpenAIChat`
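A sketch combining the two arguments documented above, assuming `PROMPTLAYER_API_KEY` and `OPENAI_API_KEY` are set; reading the request id from `generation_info["pl_request_id"]` reflects our understanding of how the integration reports it, so treat the key name as an assumption:

```python
from langchain.llms import PromptLayerOpenAI

# Assumes PROMPTLAYER_API_KEY and OPENAI_API_KEY are set.
llm = PromptLayerOpenAI(pl_tags=["langchain-requests", "chatbot"], return_pl_id=True)

# With return_pl_id=True the PromptLayer request id is attached to each
# generation, where it can be used while tracking requests.
result = llm.generate(["Tell me a joke"])
pl_request_id = result.generations[0][0].generation_info["pl_request_id"]
```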
docs/integrations/tensorflow_hub.md (new file, 22 lines)
@@ -0,0 +1,22 @@
+# Tensorflow Hub
+
+>[TensorFlow Hub](https://www.tensorflow.org/hub) is a repository of trained machine learning models ready for fine-tuning and deployable anywhere.
+
+>[TensorFlow Hub](https://tfhub.dev/) lets you search and discover hundreds of trained, ready-to-deploy machine learning models in one place.
+
+## Installation and Setup
+
+```bash
+pip install tensorflow-hub
+pip install tensorflow_text
+```
+
+## Text Embedding Models
+
+See a [usage example](../modules/models/text_embedding/examples/tensorflowhub.ipynb)
+
+```python
+from langchain.embeddings import TensorflowHubEmbeddings
+```
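A minimal sketch of the `TensorflowHubEmbeddings` class this new page imports; the default constructor is assumed to fetch a default TF Hub embedding model on first use:

```python
from langchain.embeddings import TensorflowHubEmbeddings

# Downloads the default TensorFlow Hub embedding model on first use.
embeddings = TensorflowHubEmbeddings()
query_vector = embeddings.embed_query("Hello world")
doc_vectors = embeddings.embed_documents(["TensorFlow Hub hosts reusable models."])
```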
@@ -7,7 +7,12 @@
    "source": [
     "# Anthropic\n",
     "\n",
-    "This notebook covers how to get started with Anthropic chat models."
+    "\n",
+    ">[Anthropic](https://en.wikipedia.org/wiki/Anthropic) is an American artificial intelligence (AI) startup and \n",
+    "> public-benefit corporation, founded by former members of OpenAI. `Anthropic` specializes in developing general AI \n",
+    "> systems and language models, with a company ethos of responsible AI usage.\n",
+    "> `Anthropic` develops a chatbot, named `Claude`. Similar to `ChatGPT`, `Claude` uses a messaging \n",
+    "> interface where users can submit questions or requests and receive highly detailed and relevant responses.\n"
    ]
   },
   {
@@ -171,7 +176,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.11.3"
+   "version": "3.10.6"
   }
  },
  "nbformat": 4,
@@ -4,9 +4,14 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "# Google Cloud Platform Vertex AI PaLM \n",
+   "# Google Vertex AI PaLM \n",
    "\n",
-   "Note: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there. \n",
+   ">[Vertex AI](https://cloud.google.com/vertex-ai/docs/start/introduction-unified-platform) is a machine learning (ML) \n",
+   "> platform that lets you train and deploy ML models and AI applications. \n",
+   "> `Vertex AI` combines data engineering, data science, and ML engineering workflows, enabling your teams to \n",
+   "> collaborate using a common toolset.\n",
+   "\n",
+   "**Note:** This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there. \n",
    "\n",
    "PaLM API on Vertex AI is a Preview offering, subject to the Pre-GA Offerings Terms of the [GCP Service Specific Terms](https://cloud.google.com/terms/service-terms). \n",
    "\n",
@@ -157,7 +162,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.9.1"
+   "version": "3.10.6"
   },
   "vscode": {
    "interpreter": {
@ -1,18 +1,19 @@
|
|||||||
{
|
{
|
||||||
"cells": [
|
"cells": [
|
||||||
{
|
{
|
||||||
"attachments": {},
|
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "959300d4",
|
"id": "959300d4",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
"source": [
|
"source": [
|
||||||
"# PromptLayer ChatOpenAI\n",
|
"# PromptLayer ChatOpenAI\n",
|
||||||
"\n",
|
"\n",
|
||||||
"This example showcases how to connect to [PromptLayer](https://www.promptlayer.com) to start recording your ChatOpenAI requests."
|
">[PromptLayer](https://docs.promptlayer.com/what-is-promptlayer/wxpF9EZkUwvdkwvVE9XEvC/how-promptlayer-works/dvgGSxNe6nB1jj8mUVbG8r) \n",
|
||||||
|
"> is a devtool that allows you to track, manage, and share your GPT prompt engineering. \n",
|
||||||
|
"> It acts as a middleware between your code and OpenAI's python library, recording all your API requests \n",
|
||||||
|
"> and saving relevant metadata for easy exploration and search in the [PromptLayer](https://www.promptlayer.com) dashboard."
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"attachments": {},
|
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "6a45943e",
|
"id": "6a45943e",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
@ -56,7 +57,6 @@
|
|||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"attachments": {},
|
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "8564ce7d",
|
"id": "8564ce7d",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
@ -78,7 +78,6 @@
|
|||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"attachments": {},
|
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "bf0294de",
|
"id": "bf0294de",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
@ -110,7 +109,6 @@
|
|||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"attachments": {},
|
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "a2d76826",
|
"id": "a2d76826",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
@ -125,7 +123,6 @@
|
|||||||
"source": []
|
"source": []
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
"attachments": {},
|
|
||||||
"cell_type": "markdown",
|
"cell_type": "markdown",
|
||||||
"id": "c43803d1",
|
"id": "c43803d1",
|
||||||
"metadata": {},
|
"metadata": {},
|
||||||
@ -161,7 +158,7 @@
|
|||||||
],
|
],
|
||||||
"metadata": {
|
"metadata": {
|
||||||
"kernelspec": {
|
"kernelspec": {
|
||||||
"display_name": "base",
|
"display_name": "Python 3 (ipykernel)",
|
||||||
"language": "python",
|
"language": "python",
|
||||||
"name": "python3"
|
"name": "python3"
|
||||||
},
|
},
|
||||||
@ -175,7 +172,7 @@
|
|||||||
"name": "python",
|
"name": "python",
|
||||||
"nbconvert_exporter": "python",
|
"nbconvert_exporter": "python",
|
||||||
"pygments_lexer": "ipython3",
|
"pygments_lexer": "ipython3",
|
||||||
"version": "3.8.8 (default, Apr 13 2021, 12:59:45) \n[Clang 10.0.0 ]"
|
"version": "3.10.6"
|
||||||
},
|
},
|
||||||
"vscode": {
|
"vscode": {
|
||||||
"interpreter": {
|
"interpreter": {
|
||||||
|
@@ -6,7 +6,11 @@
    "id": "J-yvaDTmTTza"
   },
   "source": [
-   "# Beam integration for langchain\n",
+   "# Beam\n",
+   "\n",
+   ">[Beam](https://docs.beam.cloud/introduction) makes it easy to run code on GPUs, deploy scalable web APIs, \n",
+   "> schedule cron jobs, and run massively parallel workloads — without managing any infrastructure.\n",
+   "\n",
    "\n",
    "Calls the Beam API wrapper to deploy and make subsequent calls to an instance of the gpt2 LLM in a cloud deployment. Requires installation of the Beam library and registration of Beam Client ID and Client Secret. By calling the wrapper an instance of the model is created and run, with returned text relating to the prompt. Additional calls can then be made by directly calling the Beam API.\n",
    "\n",
@@ -151,9 +155,9 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-  "version": "3.11.3"
+  "version": "3.10.6"
  }
 },
 "nbformat": 4,
-"nbformat_minor": 1
+"nbformat_minor": 4
 }
@@ -5,9 +5,9 @@
   "id": "959300d4",
   "metadata": {},
   "source": [
-   "# Hugging Face Local Pipelines\n",
+   "# Hugging Face Pipeline\n",
    "\n",
-   "Hugging Face models can be run locally through the `HuggingFacePipeline` class.\n",
+   "`Hugging Face` models can be run locally through the `HuggingFacePipeline` class.\n",
    "\n",
    "The [Hugging Face Model Hub](https://huggingface.co/models) hosts over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together.\n",
    "\n",
@@ -137,7 +137,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-  "version": "3.11.2"
+  "version": "3.10.6"
  }
 },
 "nbformat": 4,
@@ -5,9 +5,9 @@
   "id": "fdd7864c-93e6-4eb4-a923-b80d2ae4377d",
   "metadata": {},
   "source": [
-   "# Structured Decoding with JSONFormer\n",
+   "# Jsonformer\n",
    "\n",
-   "[JSONFormer](https://github.com/1rgs/jsonformer) is a library that wraps local HuggingFace pipeline models for structured decoding of a subset of the JSON Schema.\n",
+   "[Jsonformer](https://github.com/1rgs/jsonformer) is a library that wraps local `HuggingFace pipeline` models for structured decoding of a subset of the JSON Schema.\n",
    "\n",
    "It works by filling in the structure tokens and then sampling the content tokens from the model.\n",
    "\n",
@@ -272,7 +272,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-  "version": "3.11.2"
+  "version": "3.10.6"
  }
 },
 "nbformat": 4,
@@ -1,19 +1,16 @@
 {
- "nbformat": 4,
- "nbformat_minor": 0,
- "metadata": {
-  "colab": {
-   "provenance": []
-  },
-  "kernelspec": {
-   "name": "python3",
-   "display_name": "Python 3"
-  },
-  "language_info": {
-   "name": "python"
-  }
- },
  "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "mesCTyhnJkNS"
+   },
+   "source": [
+    "# Prediction Guard\n",
+    "\n",
+    ">[Prediction Guard](https://docs.predictionguard.com/) gives quick and easy access to state-of-the-art open and closed access LLMs, without needing to spend days and weeks figuring out all of the implementation details, managing a bunch of different API specs, and setting up the infrastructure for model deployments."
+   ]
+  },
   {
    "cell_type": "code",
    "execution_count": null,
@@ -27,31 +24,26 @@
   },
   {
    "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "2xe8JEUwA7_y"
+   },
+   "outputs": [],
    "source": [
     "import os\n",
     "\n",
     "import predictionguard as pg\n",
     "from langchain.llms import PredictionGuard\n",
     "from langchain import PromptTemplate, LLMChain"
-   ],
-   "metadata": {
-    "id": "2xe8JEUwA7_y"
-   },
-   "execution_count": null,
-   "outputs": []
-  },
-  {
-   "cell_type": "markdown",
-   "source": [
-    "# Basic LLM usage\n",
-    "\n"
-   ],
-   "metadata": {
-    "id": "mesCTyhnJkNS"
-   }
+   ]
   },
   {
    "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "kp_Ymnx1SnDG"
+   },
+   "outputs": [],
    "source": [
     "# Optional, add your OpenAI API Key. This is optional, as Prediction Guard allows\n",
     "# you to access all the latest open access models (see https://docs.predictionguard.com)\n",
@@ -59,46 +51,46 @@
     "\n",
     "# Your Prediction Guard API key. Get one at predictionguard.com\n",
     "os.environ[\"PREDICTIONGUARD_TOKEN\"] = \"<your Prediction Guard access token>\""
-   ],
-   "metadata": {
-    "id": "kp_Ymnx1SnDG"
-   },
-   "execution_count": null,
-   "outputs": []
+   ]
   },
   {
    "cell_type": "code",
-   "source": [
-    "pgllm = PredictionGuard(model=\"OpenAI-text-davinci-003\")"
-   ],
+   "execution_count": null,
    "metadata": {
     "id": "Ua7Mw1N4HcER"
    },
-   "execution_count": null,
-   "outputs": []
+   "outputs": [],
+   "source": [
+    "pgllm = PredictionGuard(model=\"OpenAI-text-davinci-003\")"
+   ]
  },
   {
    "cell_type": "code",
-   "source": [
-    "pgllm(\"Tell me a joke\")"
-   ],
+   "execution_count": null,
    "metadata": {
     "id": "Qo2p5flLHxrB"
    },
-   "execution_count": null,
-   "outputs": []
+   "outputs": [],
+   "source": [
+    "pgllm(\"Tell me a joke\")"
+   ]
  },
   {
    "cell_type": "markdown",
-   "source": [
-    "# Control the output structure/ type of LLMs"
-   ],
    "metadata": {
     "id": "EyBYaP_xTMXH"
-   }
+   },
+   "source": [
+    "# Control the output structure/ type of LLMs"
+   ]
   },
   {
    "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "55uxzhQSTPqF"
+   },
+   "outputs": [],
    "source": [
     "template = \"\"\"Respond to the following query based on the context.\n",
     "\n",
@@ -112,27 +104,27 @@
     "\n",
     "Result: \"\"\"\n",
     "prompt = PromptTemplate(template=template, input_variables=[\"query\"])"
-   ],
-   "metadata": {
-    "id": "55uxzhQSTPqF"
-   },
-   "execution_count": null,
-   "outputs": []
+   ]
  },
   {
    "cell_type": "code",
-   "source": [
-    "# Without \"guarding\" or controlling the output of the LLM.\n",
-    "pgllm(prompt.format(query=\"What kind of post is this?\"))"
-   ],
+   "execution_count": null,
    "metadata": {
     "id": "yersskWbTaxU"
    },
-   "execution_count": null,
-   "outputs": []
+   "outputs": [],
+   "source": [
+    "# Without \"guarding\" or controlling the output of the LLM.\n",
+    "pgllm(prompt.format(query=\"What kind of post is this?\"))"
+   ]
  },
   {
    "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "PzxSbYwqTm2w"
+   },
+   "outputs": [],
    "source": [
     "# With \"guarding\" or controlling the output of the LLM. See the \n",
     "# Prediction Guard docs (https://docs.predictionguard.com) to learn how to \n",
@@ -148,35 +140,35 @@
     " ]\n",
     " })\n",
     "pgllm(prompt.format(query=\"What kind of post is this?\"))"
-   ],
-   "metadata": {
-    "id": "PzxSbYwqTm2w"
-   },
-   "execution_count": null,
-   "outputs": []
+   ]
  },
   {
    "cell_type": "markdown",
-   "source": [
-    "# Chaining"
-   ],
    "metadata": {
     "id": "v3MzIUItJ8kV"
-   }
+   },
+   "source": [
+    "# Chaining"
+   ]
   },
   {
    "cell_type": "code",
-   "source": [
-    "pgllm = PredictionGuard(model=\"OpenAI-text-davinci-003\")"
-   ],
+   "execution_count": null,
    "metadata": {
     "id": "pPegEZExILrT"
    },
-   "execution_count": null,
-   "outputs": []
+   "outputs": [],
+   "source": [
+    "pgllm = PredictionGuard(model=\"OpenAI-text-davinci-003\")"
+   ]
  },
   {
    "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "suxw62y-J-bg"
+   },
+   "outputs": [],
    "source": [
     "template = \"\"\"Question: {question}\n",
     "\n",
@@ -187,36 +179,55 @@
     "question = \"What NFL team won the Super Bowl in the year Justin Beiber was born?\"\n",
     "\n",
     "llm_chain.predict(question=question)"
-   ],
-   "metadata": {
-    "id": "suxw62y-J-bg"
-   },
-   "execution_count": null,
-   "outputs": []
+   ]
  },
   {
    "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "l2bc26KHKr7n"
+   },
+   "outputs": [],
    "source": [
     "template = \"\"\"Write a {adjective} poem about {subject}.\"\"\"\n",
     "prompt = PromptTemplate(template=template, input_variables=[\"adjective\", \"subject\"])\n",
     "llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)\n",
     "\n",
     "llm_chain.predict(adjective=\"sad\", subject=\"ducks\")"
-   ],
-   "metadata": {
-    "id": "l2bc26KHKr7n"
-   },
-   "execution_count": null,
-   "outputs": []
+   ]
  },
   {
    "cell_type": "code",
-   "source": [],
+   "execution_count": null,
    "metadata": {
     "id": "I--eSa2PLGqq"
    },
-   "execution_count": null,
-   "outputs": []
+   "outputs": [],
+   "source": []
   }
- ]
+ ],
+ "metadata": {
+  "colab": {
+   "provenance": []
+  },
+  "kernelspec": {
+   "display_name": "Python 3 (ipykernel)",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.10.6"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
 }
@@ -5,11 +5,10 @@
   "id": "fdd7864c-93e6-4eb4-a923-b80d2ae4377d",
   "metadata": {},
   "source": [
-   "# Structured Decoding with RELLM\n",
+   "# ReLLM\n",
    "\n",
-   "[RELLM](https://github.com/r2d4/rellm) is a library that wraps local Hugging Face pipeline models for structured decoding.\n",
-   "\n",
-   "It works by generating tokens one at a time. At each step, it masks tokens that don't conform to the provided partial regular expression.\n",
+   ">[ReLLM](https://github.com/r2d4/rellm) is a library that wraps local Hugging Face pipeline models for structured decoding.\n",
+   ">It works by generating tokens one at a time. At each step, it masks tokens that don't conform to the provided partial regular expression.\n",
    "\n",
    "\n",
    "**Warning - this module is still experimental**"
@@ -200,7 +199,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-  "version": "3.11.2"
+  "version": "3.10.6"
  }
 },
 "nbformat": 4,
@@ -5,7 +5,9 @@
   "id": "75e378f5-55d7-44b6-8e2e-6d7b8b171ec4",
   "metadata": {},
   "source": [
-   "# Bedrock Embeddings"
+   "# Amazon Bedrock\n",
+   "\n",
+   ">[Amazon Bedrock](https://aws.amazon.com/bedrock/) is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case.\n"
   ]
  },
  {
@@ -67,7 +69,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-  "version": "3.10.11"
+  "version": "3.10.6"
  }
 },
 "nbformat": 4,
@@ -93,7 +93,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-  "version": "3.9.1"
+  "version": "3.10.6"
  },
  "vscode": {
   "interpreter": {
@@ -4,7 +4,7 @@
  "cell_type": "markdown",
  "metadata": {},
  "source": [
-  "# Google Cloud Platform Vertex AI PaLM \n",
+  "# Google Vertex AI PaLM \n",
   "\n",
   "Note: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there. \n",
   "\n",
@@ -100,7 +100,7 @@
  "name": "python",
  "nbconvert_exporter": "python",
  "pygments_lexer": "ipython3",
- "version": "3.9.1"
+ "version": "3.10.6"
 },
 "vscode": {
  "interpreter": {
@@ -5,8 +5,8 @@
   "id": "59428e05",
   "metadata": {},
   "source": [
-   "# InstructEmbeddings\n",
-   "Let's load the HuggingFace instruct Embeddings class."
+   "# HuggingFace Instruct\n",
+   "Let's load the `HuggingFace instruct Embeddings` class."
   ]
  },
  {
@@ -85,7 +85,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-  "version": "3.9.1"
+  "version": "3.10.6"
  },
  "vscode": {
   "interpreter": {
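For reference, a minimal sketch of the `HuggingFaceInstructEmbeddings` class this notebook loads; the extra pip dependencies and the instruction string are illustrative assumptions:

```python
from langchain.embeddings import HuggingFaceInstructEmbeddings

# Assumes `pip install InstructorEmbedding sentence_transformers`;
# the query instruction below is illustrative.
embeddings = HuggingFaceInstructEmbeddings(
    query_instruction="Represent the query for retrieval: "
)
embeddings.embed_query("What are instruction-tuned embeddings?")
```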
@@ -1,11 +1,10 @@
 {
  "cells": [
   {
-   "attachments": {},
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "# MosaicML embeddings\n",
+    "# MosaicML\n",
     "\n",
     "[MosaicML](https://docs.mosaicml.com/en/latest/inference.html) offers a managed inference service. You can either use a variety of open source models, or deploy your own.\n",
     "\n",
@@ -92,6 +91,11 @@
   }
  ],
  "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3 (ipykernel)",
+   "language": "python",
+   "name": "python3"
+  },
   "language_info": {
    "codemirror_mode": {
     "name": "ipython",
@@ -101,9 +105,10 @@
    "mimetype": "text/x-python",
    "name": "python",
    "nbconvert_exporter": "python",
-   "pygments_lexer": "ipython3"
+   "pygments_lexer": "ipython3",
+   "version": "3.10.6"
   }
  },
  "nbformat": 4,
- "nbformat_minor": 2
+ "nbformat_minor": 4
 }
@@ -5,9 +5,9 @@
   "id": "1f83f273",
   "metadata": {},
   "source": [
-   "# SageMaker Endpoint Embeddings\n",
+   "# SageMaker Endpoint\n",
    "\n",
-   "Let's load the SageMaker Endpoints Embeddings class. The class can be used if you host, e.g. your own Hugging Face model on SageMaker.\n",
+   "Let's load the `SageMaker Endpoints Embeddings` class. The class can be used if you host, e.g. your own Hugging Face model on SageMaker.\n",
    "\n",
    "For instructions on how to do this, please see [here](https://www.philschmid.de/custom-inference-huggingface-sagemaker). **Note**: In order to handle batched requests, you will need to adjust the return line in the `predict_fn()` function within the custom `inference.py` script:\n",
    "\n",
@@ -122,7 +122,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-  "version": "3.9.1"
+  "version": "3.10.6"
  },
  "vscode": {
   "interpreter": {
@@ -1,12 +1,11 @@
 {
  "cells": [
   {
-   "attachments": {},
    "cell_type": "markdown",
    "id": "ed47bb62",
    "metadata": {},
    "source": [
-    "# Sentence Transformers Embeddings\n",
+    "# Sentence Transformers\n",
     "\n",
     "[Sentence Transformers](https://www.sbert.net/) embeddings are called using the `HuggingFaceEmbeddings` integration. We have also added an alias for `SentenceTransformerEmbeddings` for users who are more familiar with directly using that package.\n",
     "\n",
@@ -109,7 +108,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-  "version": "3.8.16"
+  "version": "3.10.6"
  },
  "vscode": {
   "interpreter": {
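A minimal sketch of the `SentenceTransformerEmbeddings` alias this notebook describes; the model name is an illustrative assumption:

```python
from langchain.embeddings import SentenceTransformerEmbeddings

# Alias for HuggingFaceEmbeddings, per the notebook; requires
# `pip install sentence_transformers`. model_name is illustrative.
embeddings = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
embeddings.embed_documents(["Sentence Transformers runs locally."])
```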
@@ -6,7 +6,10 @@
   "metadata": {},
   "source": [
    "# Tensorflow Hub\n",
-   "Let's load the TensorflowHub Embedding class."
+   "\n",
+   ">[TensorFlow Hub](https://www.tensorflow.org/hub) is a repository of trained machine learning models ready for fine-tuning and deployable anywhere.\n",
+   "\n",
+   ">[TensorFlow Hub](https://tfhub.dev/) lets you search and discover hundreds of trained, ready-to-deploy machine learning models in one place.\n"
   ]
  },
  {
@@ -105,7 +108,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-  "version": "3.9.1"
+  "version": "3.10.6"
  },
  "vscode": {
   "interpreter": {