docs: `integrations/providers/` update (#14315)

- added missing provider files (from `integrations/Callbacks`)
- updated notebooks: added links; updated them to a consistent format
pull/14626/head
Leonid Ganeline 10 months ago committed by GitHub
parent 6607cc6eab
commit 0f02e94565

@ -7,8 +7,6 @@
"source": [
"# Argilla\n",
"\n",
"![Argilla - Open-source data platform for LLMs](https://argilla.io/og.png)\n",
"\n",
">[Argilla](https://argilla.io/) is an open-source data curation platform for LLMs.\n",
"> Using Argilla, everyone can build robust language models through faster data curation \n",
"> using both human and machine feedback. We provide support for each step in the MLOps cycle, \n",
@ -410,7 +408,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
"version": "3.10.12"
},
"vscode": {
"interpreter": {

@ -7,12 +7,9 @@
"source": [
"# Context\n",
"\n",
"![Context - User Analytics for LLM Powered Products](https://with.context.ai/langchain.png)\n",
">[Context](https://context.ai/) provides user analytics for LLM-powered products and features.\n",
"\n",
"[Context](https://context.ai/) provides user analytics for LLM powered products and features.\n",
"\n",
"With Context, you can start understanding your users and improving their experiences in less than 30 minutes.\n",
"\n"
"With `Context`, you can start understanding your users and improving their experiences in less than 30 minutes.\n"
]
},
{
@ -89,11 +86,9 @@
"metadata": {},
"source": [
"## Usage\n",
"### Using the Context callback within a chat model\n",
"\n",
"The Context callback handler can be used to directly record transcripts between users and AI assistants.\n",
"### Context callback within a chat model\n",
"\n",
"#### Example"
"The Context callback handler can be used to directly record transcripts between users and AI assistants."
]
},
{
@ -132,7 +127,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Using the Context callback within Chains\n",
"### Context callback within Chains\n",
"\n",
"The Context callback handler can also be used to record the inputs and outputs of chains. Note that intermediate steps of the chain are not recorded - only the starting inputs and final outputs.\n",
"\n",
@ -149,9 +144,7 @@
">handler = ContextCallbackHandler(token)\n",
">chat = ChatOpenAI(temperature=0.9, callbacks=[callback])\n",
">chain = LLMChain(llm=chat, prompt=chat_prompt_template, callbacks=[callback])\n",
">```\n",
"\n",
"#### Example"
">```\n"
]
},
{
@ -203,7 +196,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.12"
},
"vscode": {
"interpreter": {

@ -7,12 +7,14 @@
"source": [
"# Infino\n",
"\n",
">[Infino](https://github.com/infinohq/infino) is a scalable telemetry store designed for logs, metrics, and traces. Infino can function as a standalone observability solution or as the storage layer in your observability stack.\n",
"\n",
"This example shows how one can track the following while calling OpenAI and ChatOpenAI models via `LangChain` and [Infino](https://github.com/infinohq/infino):\n",
"\n",
"* prompt input,\n",
"* response from `ChatGPT` or any other `LangChain` model,\n",
"* latency,\n",
"* errors,\n",
"* prompt input\n",
"* response from `ChatGPT` or any other `LangChain` model\n",
"* latency\n",
"* errors\n",
"* number of tokens consumed"
]
},
@ -454,7 +456,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.12"
}
},
"nbformat": 4,

@ -4,6 +4,9 @@
"cell_type": "markdown",
"metadata": {
"collapsed": true,
"jupyter": {
"outputs_hidden": true
},
"pycharm": {
"name": "#%% md\n"
}
@ -11,17 +14,14 @@
"source": [
"# Label Studio\n",
"\n",
"<div>\n",
"<img src=\"https://labelstudio-pub.s3.amazonaws.com/lc/open-source-data-labeling-platform.png\" width=\"400\"/>\n",
"</div>\n",
"\n",
"Label Studio is an open-source data labeling platform that provides LangChain with flexibility when it comes to labeling data for fine-tuning large language models (LLMs). It also enables the preparation of custom training data and the collection and evaluation of responses through human feedback.\n",
">[Label Studio](https://labelstud.io/guide/get_started) is an open-source data labeling platform that provides LangChain with flexibility when it comes to labeling data for fine-tuning large language models (LLMs). It also enables the preparation of custom training data and the collection and evaluation of responses through human feedback.\n",
"\n",
"In this guide, you will learn how to connect a LangChain pipeline to Label Studio to:\n",
"In this guide, you will learn how to connect a LangChain pipeline to `Label Studio` to:\n",
"\n",
"- Aggregate all input prompts, conversations, and responses in a single LabelStudio project. This consolidates all the data in one place for easier labeling and analysis.\n",
"- Aggregate all input prompts, conversations, and responses in a single `Label Studio` project. This consolidates all the data in one place for easier labeling and analysis.\n",
"- Refine prompts and responses to create a dataset for supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) scenarios. The labeled data can be used to further train the LLM to improve its performance.\n",
"- Evaluate model responses through human feedback. LabelStudio provides an interface for humans to review and provide feedback on model responses, allowing evaluation and iteration."
"- Evaluate model responses through human feedback. `Label Studio` provides an interface for humans to review and provide feedback on model responses, allowing evaluation and iteration."
]
},
{
@ -362,9 +362,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "labelops",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "labelops"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@ -376,9 +376,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 1
"nbformat_minor": 4
}

@ -1,6 +1,6 @@
# LLMonitor
[LLMonitor](https://llmonitor.com?utm_source=langchain&utm_medium=py&utm_campaign=docs) is an open-source observability platform that provides cost and usage analytics, user tracking, tracing and evaluation tools.
>[LLMonitor](https://llmonitor.com?utm_source=langchain&utm_medium=py&utm_campaign=docs) is an open-source observability platform that provides cost and usage analytics, user tracking, tracing and evaluation tools.
<video controls width='100%' >
<source src='https://llmonitor.com/videos/demo-annotated.mp4'/>

@ -7,11 +7,10 @@
"source": [
"# PromptLayer\n",
"\n",
"![PromptLayer](https://promptlayer.com/text_logo.png)\n",
"\n",
"[PromptLayer](https://promptlayer.com) is a an LLM observability platform that lets you visualize requests, version prompts, and track usage. In this guide we will go over how to setup the `PromptLayerCallbackHandler`. \n",
">[PromptLayer](https://promptlayer.com) is a an LLM observability platform that lets you visualize requests, version prompts, and track usage. In this guide we will go over how to setup the `PromptLayerCallbackHandler`. \n",
"\n",
"While PromptLayer does have LLMs that integrate directly with LangChain (e.g. [`PromptLayerOpenAI`](https://python.langchain.com/docs/integrations/llms/promptlayer_openai)), this callback is the recommended way to integrate PromptLayer with LangChain.\n",
"While `PromptLayer` does have LLMs that integrate directly with LangChain (e.g. [`PromptLayerOpenAI`](https://python.langchain.com/docs/integrations/llms/promptlayer_openai)), this callback is the recommended way to integrate PromptLayer with LangChain.\n",
"\n",
"See [our docs](https://docs.promptlayer.com/languages/langchain) for more information."
]
@ -51,7 +50,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Usage\n",
"## Usage\n",
"\n",
"Getting started with `PromptLayerCallbackHandler` is fairly simple, it takes two optional arguments:\n",
"1. `pl_tags` - an optional list of strings that will be tracked as tags on PromptLayer.\n",
@ -63,7 +62,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Simple OpenAI Example\n",
"## Simple OpenAI Example\n",
"\n",
"In this simple example we use `PromptLayerCallbackHandler` with `ChatOpenAI`. We add a PromptLayer tag named `chatopenai`"
]
@ -99,7 +98,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### GPT4All Example"
"## GPT4All Example"
]
},
{
@ -125,9 +124,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Full Featured Example\n",
"## Full Featured Example\n",
"\n",
"In this example we unlock more of the power of PromptLayer.\n",
"In this example, we unlock more of the power of `PromptLayer`.\n",
"\n",
"PromptLayer allows you to visually create, version, and track prompt templates. Using the [Prompt Registry](https://docs.promptlayer.com/features/prompt-registry), we can programmatically fetch the prompt template called `example`.\n",
"\n",
@ -182,7 +181,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "base",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@ -196,7 +195,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8 (default, Apr 13 2021, 12:59:45) \n[Clang 10.0.0 ]"
"version": "3.10.12"
},
"vscode": {
"interpreter": {

@ -7,14 +7,15 @@
"source": [
"# SageMaker Tracking\n",
"\n",
"This notebook shows how LangChain Callback can be used to log and track prompts and other LLM hyperparameters into SageMaker Experiments. Here, we use different scenarios to showcase the capability:\n",
">[Amazon SageMaker](https://aws.amazon.com/sagemaker/) is a fully managed service that is used to quickly and easily build, train and deploy machine learning (ML) models. \n",
"\n",
">[Amazon SageMaker Experiments](https://docs.aws.amazon.com/sagemaker/latest/dg/experiments.html) is a capability of `Amazon SageMaker` that lets you organize, track, compare and evaluate ML experiments and model versions.\n",
"\n",
"This notebook shows how LangChain Callback can be used to log and track prompts and other LLM hyperparameters into `SageMaker Experiments`. Here, we use different scenarios to showcase the capability:\n",
"* **Scenario 1**: *Single LLM* - A case where a single LLM model is used to generate output based on a given prompt.\n",
"* **Scenario 2**: *Sequential Chain* - A case where a sequential chain of two LLM models is used.\n",
"* **Scenario 3**: *Agent with Tools (Chain of Thought)* - A case where multiple tools (search and math) are used in addition to an LLM.\n",
"\n",
"[Amazon SageMaker](https://aws.amazon.com/sagemaker/) is a fully managed service that is used to quickly and easily build, train and deploy machine learning (ML) models. \n",
"\n",
"[Amazon SageMaker Experiments](https://docs.aws.amazon.com/sagemaker/latest/dg/experiments.html) is a capability of Amazon SageMaker that lets you organize, track, compare and evaluate ML experiments and model versions.\n",
"\n",
"In this notebook, we will create a single experiment to log the prompts from each scenario."
]
@ -899,9 +900,9 @@
],
"instance_type": "ml.t3.large",
"kernelspec": {
"display_name": "conda_pytorch_p310",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "conda_pytorch_p310"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@ -913,7 +914,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.10"
"version": "3.10.12"
}
},
"nbformat": 4,

@ -9,12 +9,13 @@
"source": [
"# Trubrics\n",
"\n",
"![Trubrics](https://miro.medium.com/v2/resize:fit:720/format:webp/1*AhYbKO-v8F4u3hx2aDIqKg.png)\n",
"\n",
"[Trubrics](https://trubrics.com) is an LLM user analytics platform that lets you collect, analyse and manage user\n",
"prompts & feedback on AI models. In this guide we will go over how to setup the `TrubricsCallbackHandler`. \n",
">[Trubrics](https://trubrics.com) is an LLM user analytics platform that lets you collect, analyse and manage user\n",
"prompts & feedback on AI models.\n",
">\n",
">Check out [Trubrics repo](https://github.com/trubrics/trubrics-sdk) for more information on `Trubrics`.\n",
"\n",
"Check out [our repo](https://github.com/trubrics/trubrics-sdk) for more information on Trubrics."
"In this guide, we will go over how to set up the `TrubricsCallbackHandler`. \n"
]
},
{
@ -347,9 +348,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "langchain",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "langchain"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@ -361,7 +362,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.10.12"
}
},
"nbformat": 4,

@ -0,0 +1,20 @@
# Context
>[Context](https://context.ai/) provides user analytics for LLM-powered products and features.
## Installation and Setup
We need to install the `context-python` Python package:
```bash
pip install context-python
```
## Callbacks
See a [usage example](/docs/integrations/callbacks/context).
```python
from langchain.callbacks import ContextCallbackHandler
```
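
A minimal sketch of attaching the handler to a chat model, following the usage example above; the `CONTEXT_API_TOKEN` environment variable name is an assumption — use wherever you keep your Context API token:

```python
import os

from langchain.callbacks import ContextCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

# Assumes the Context API token is exported as CONTEXT_API_TOKEN.
token = os.environ["CONTEXT_API_TOKEN"]

# The handler records the transcript between the user and the assistant in Context.
chat = ChatOpenAI(temperature=0, callbacks=[ContextCallbackHandler(token)])
chat([HumanMessage(content="Hello, how are you?")])
```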

@ -0,0 +1,23 @@
# Label Studio
>[Label Studio](https://labelstud.io/guide/get_started) is an open-source data labeling platform that provides LangChain with flexibility when it comes to labeling data for fine-tuning large language models (LLMs). It also enables the preparation of custom training data and the collection and evaluation of responses through human feedback.
## Installation and Setup
See the [Label Studio installation guide](https://labelstud.io/guide/install) for installation options.
We need to install the `label-studio` and `label-studio-sdk` Python packages:
```bash
pip install label-studio label-studio-sdk
```
## Callbacks
See a [usage example](/docs/integrations/callbacks/labelstudio).
```python
from langchain.callbacks import LabelStudioCallbackHandler
```
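
A minimal sketch of collecting prompts and completions in a Label Studio project; the `project_name` argument and the `LABEL_STUDIO_URL` / `LABEL_STUDIO_API_KEY` environment variables are assumptions, not a definitive configuration — see the usage example for details:

```python
import os

from langchain.callbacks import LabelStudioCallbackHandler
from langchain.llms import OpenAI

# Assumed environment variables pointing at a running Label Studio instance.
os.environ["LABEL_STUDIO_URL"] = "http://localhost:8080"
os.environ["LABEL_STUDIO_API_KEY"] = "<your-api-key>"

# Prompts and completions are aggregated in the named Label Studio project for labeling.
llm = OpenAI(
    temperature=0,
    callbacks=[LabelStudioCallbackHandler(project_name="LangChain prompts")],
)
print(llm("Tell me a joke"))
```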

@ -0,0 +1,22 @@
# LLMonitor
>[LLMonitor](https://llmonitor.com?utm_source=langchain&utm_medium=py&utm_campaign=docs) is an open-source observability platform that provides cost and usage analytics, user tracking, tracing and evaluation tools.
## Installation and Setup
Create an account on [llmonitor.com](https://llmonitor.com?utm_source=langchain&utm_medium=py&utm_campaign=docs), then copy your new app's `tracking id`.
Once you have it, set it as an environment variable by running:
```bash
export LLMONITOR_APP_ID="..."
```
## Callbacks
See a [usage example](/docs/integrations/callbacks/llmonitor).
```python
from langchain.callbacks import LLMonitorCallbackHandler
```
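
A minimal sketch, assuming `LLMONITOR_APP_ID` is already exported as shown above so the handler needs no arguments:

```python
from langchain.callbacks import LLMonitorCallbackHandler
from langchain.llms import OpenAI

# With LLMONITOR_APP_ID set in the environment, the handler can be created without arguments.
handler = LLMonitorCallbackHandler()

# Every call made through this LLM is traced in LLMonitor, with usage and cost analytics.
llm = OpenAI(callbacks=[handler])
llm("Tell me a joke")
```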

@ -0,0 +1,22 @@
# Streamlit
> [Streamlit](https://streamlit.io/) is a faster way to build and share data apps.
> `Streamlit` turns data scripts into shareable web apps in minutes. All in pure Python. No frontend experience required.
> See more examples at [streamlit.io/generative-ai](https://streamlit.io/generative-ai).
## Installation and Setup
We need to install the `streamlit` Python package:
```bash
pip install streamlit
```
## Callbacks
See a [usage example](/docs/integrations/callbacks/streamlit).
```python
from langchain.callbacks import StreamlitCallbackHandler
```
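
A minimal sketch of streaming an agent's thoughts and actions into a Streamlit app; the tools and agent setup are illustrative — only `StreamlitCallbackHandler` comes from this integration:

```python
import streamlit as st

from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.callbacks import StreamlitCallbackHandler
from langchain.llms import OpenAI

llm = OpenAI(temperature=0, streaming=True)
tools = load_tools(["ddg-search"])  # requires the `duckduckgo-search` package
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)

if prompt := st.chat_input():
    st.chat_message("user").write(prompt)
    with st.chat_message("assistant"):
        # The handler renders the agent's intermediate steps into this container.
        st_callback = StreamlitCallbackHandler(st.container())
        response = agent.run(prompt, callbacks=[st_callback])
        st.write(response)
```

Run it with `streamlit run app.py`.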

@ -0,0 +1,24 @@
# Trubrics
>[Trubrics](https://trubrics.com) is an LLM user analytics platform that lets you collect, analyse and manage user
prompts & feedback on AI models.
>
>Check out [Trubrics repo](https://github.com/trubrics/trubrics-sdk) for more information on `Trubrics`.
## Installation and Setup
We need to install the `trubrics` Python package:
```bash
pip install trubrics
```
## Callbacks
See a [usage example](/docs/integrations/callbacks/trubrics).
```python
from langchain.callbacks import TrubricsCallbackHandler
```
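
A minimal sketch, assuming Trubrics account credentials are provided via the `TRUBRICS_EMAIL` and `TRUBRICS_PASSWORD` environment variables (an assumption — see the usage example for the exact configuration):

```python
import os

from langchain.callbacks import TrubricsCallbackHandler
from langchain.llms import OpenAI

# Assumed environment variables for a Trubrics account.
os.environ["TRUBRICS_EMAIL"] = "***@***"
os.environ["TRUBRICS_PASSWORD"] = "***"

# Prompts and generations are saved to Trubrics for analysis and user feedback.
llm = OpenAI(callbacks=[TrubricsCallbackHandler()])
print(llm("Tell me a joke"))
```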