Harrison/llm integrations (#1039)

Co-authored-by: jped <jonathanped@gmail.com>
Co-authored-by: Justin Torre <justintorre75@gmail.com>
Co-authored-by: Ivan Vendrov <ivan@anthropic.com>
Harrison Chase 2023-02-13 22:06:25 -08:00 committed by GitHub
parent ec727bf166
commit 88bebb4caa
13 changed files with 348 additions and 37 deletions

BIN
docs/_static/HeliconeDashboard.png vendored Normal file

Binary file not shown.

Size: 235 KiB

BIN
docs/_static/HeliconeKeys.png vendored Normal file

Binary file not shown.

Size: 148 KiB

View File

@@ -0,0 +1,21 @@
# Helicone
This page covers how to use [Helicone](https://helicone.ai) within LangChain.
## What is Helicone?
Helicone is an [open source](https://github.com/Helicone/helicone) observability platform that proxies your OpenAI traffic and gives you key insights into your spend, latency, and usage.
![Helicone](../_static/HeliconeDashboard.png)
## Quick start
With your LangChain environment, you only need to set the following environment variable:
```bash
export OPENAI_API_BASE="https://oai.hconeai.com/v1"
```
Now head over to [helicone.ai](https://helicone.ai/onboarding?step=2) to create your account, and add your OpenAI API key within the Helicone dashboard to view your logs.
![Helicone](../_static/HeliconeKeys.png)
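
The same routing can also be done from Python. A minimal sketch, assuming a standard OpenAI setup (`OPENAI_API_KEY` already exported); only the base URL is overridden, and it must be set before the OpenAI client reads its configuration:

```python
import os

# Route OpenAI traffic through the Helicone proxy; set this before the
# OpenAI client is imported/configured.
os.environ["OPENAI_API_BASE"] = "https://oai.hconeai.com/v1"

from langchain.llms import OpenAI

llm = OpenAI(temperature=0.9)
print(llm("Tell me a joke"))
```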

View File

@@ -0,0 +1,31 @@
# PromptLayer
This page covers how to use [PromptLayer](https://www.promptlayer.com) within LangChain.
It is broken into two parts: installation and setup, and then references to specific PromptLayer wrappers.
## Installation and Setup
If you want to work with PromptLayer:
- Install the promptlayer Python library with `pip install promptlayer`
- Create a PromptLayer account
- Create an API key and set it as an environment variable (`PROMPTLAYER_API_KEY`); a minimal setup sketch is shown below
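
A minimal setup sketch, assuming only what this page already states (the `promptlayer` package and its module-level `promptlayer.api_key` attribute); the key value is a placeholder:

```python
import os

import promptlayer  # pip install promptlayer

# Prefer the PROMPTLAYER_API_KEY environment variable; the fallback
# "pl_..." is a placeholder, not a real key.
promptlayer.api_key = os.environ.get("PROMPTLAYER_API_KEY", "pl_...")
```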
## Wrappers
### LLM
There exists a PromptLayer OpenAI LLM wrapper, which you can access with
```python
from langchain.llms import PromptLayerOpenAI
```
To tag your requests, use the argument `pl_tags` when instantiating the LLM
```python
from langchain.llms import PromptLayerOpenAI
llm = PromptLayerOpenAI(pl_tags=["langchain-requests", "chatbot"])
```
This LLM is identical to the [OpenAI LLM](./openai), except that
- all your requests will be logged to your PromptLayer account
- you can add `pl_tags` when instantiating to tag your requests on PromptLayer

View File

@@ -17,6 +17,8 @@ The examples here are all "how-to" guides for how to integrate with various LLM
`Forefront AI <./integrations/forefrontai_example.html>`_: Covers how to utilize the Forefront AI wrapper.
`PromptLayer OpenAI <./integrations/promptlayer_openai.html>`_: Covers how to use `PromptLayer <https://promptlayer.com>`_ with LangChain.
.. toctree::
:maxdepth: 1

View File

@@ -0,0 +1,81 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "959300d4",
"metadata": {},
"source": [
"# PromptLayer OpenAI\n",
"\n",
"This example showcases how to connect to [PromptLayer](https://www.promptlayer.com) to start recording your OpenAI requests."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "3acf0069",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"' to go outside\\n\\nUnfortunately, cats cannot go outside without being supervised by a human. Going outside can be dangerous for cats, as they may come into contact with cars, other animals, or other dangers. If you want to go outside, ask your human to take you on a supervised walk or to a safe, enclosed outdoor space.'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.llms import PromptLayerOpenAI\n",
"import promptlayer\n",
"import os\n",
"\n",
"# Set up API keys, you can get a promptlayer api key here: https://promptlayer.com/\n",
"os.environ[\"OPENAI_API_KEY\"] = \"YOUR_OPENAI_API_KEY\"\n",
"promptlayer.api_key = \"YOUR_PROMPTLAYER_API_KEY\"\n",
"\n",
"# Optionally pass in pl_tags to track your requests\n",
"llm = PromptLayerOpenAI(pl_tags=[\"langchain\"])\n",
"\n",
"llm(\"I am a cat and I want\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ae4559c7",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "base",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8"
},
"vscode": {
"interpreter": {
"hash": "c4fe2cd85a8d9e8baaec5340ce66faff1c77581a9f43e6c45e85e09b6fced008"
}
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -24,6 +24,9 @@ The following use cases require specific installs and api keys:
- _CerebriumAI_:
- Install requirements with `pip install cerebrium`
- Get a Cerebrium api key and either set it as an environment variable (`CEREBRIUMAI_API_KEY`) or pass it to the LLM constructor as `cerebriumai_api_key`.
- _PromptLayer_:
- Install requirements with `pip install promptlayer` (be sure to be on version 0.1.62 or higher)
- Get an API key from [promptlayer.com](http://www.promptlayer.com) and set it using `promptlayer.api_key=<API KEY>`
- _SerpAPI_:
- Install requirements with `pip install google-search-results`
- Get a SerpAPI api key and either set it as an environment variable (`SERPAPI_API_KEY`) or pass it to the LLM constructor as `serpapi_api_key`.

View File

@@ -13,6 +13,7 @@ from langchain.llms.huggingface_pipeline import HuggingFacePipeline
from langchain.llms.nlpcloud import NLPCloud
from langchain.llms.openai import AzureOpenAI, OpenAI
from langchain.llms.petals import Petals
from langchain.llms.promptlayer_openai import PromptLayerOpenAI
__all__ = [
"Anthropic",
@@ -27,6 +28,7 @@ __all__ = [
"HuggingFacePipeline",
"AI21",
"AzureOpenAI",
"PromptLayerOpenAI",
]
type_to_cls_dict: Dict[str, Type[BaseLLM]] = {

View File

@@ -8,7 +8,7 @@ from langchain.utils import get_from_dict_or_env
class Anthropic(LLM, BaseModel):
"""Wrapper around Anthropic large language models.
r"""Wrapper around Anthropic large language models.
To use, you should have the ``anthropic`` python package installed, and the
environment variable ``ANTHROPIC_API_KEY`` set with your API key, or pass
@@ -16,9 +16,19 @@ class Anthropic(LLM, BaseModel):
Example:
.. code-block:: python
import anthropic
from langchain import Anthropic
anthropic = Anthropic(model="<model_name>", anthropic_api_key="my-api-key")
model = Anthropic(model="<model_name>", anthropic_api_key="my-api-key")
# Simplest invocation, automatically wrapped with HUMAN_PROMPT
# and AI_PROMPT.
response = model("What are the biggest risks facing humanity?")
# Or if you want to use the chat mode, build a few-shot-prompt, or
# put words in the Assistant's mouth, use HUMAN_PROMPT and AI_PROMPT:
raw_prompt = "What are the biggest risks facing humanity?"
prompt = f"{anthropic.HUMAN_PROMPT} {prompt}{anthropic.AI_PROMPT}"
response = model(prompt)
"""
client: Any #: :meta private:
@@ -86,19 +96,32 @@ class Anthropic(LLM, BaseModel):
"""Return type of llm."""
return "anthropic"
def _call(
self, prompt: str, stop: Optional[List[str]] = None, instruct_mode: bool = True
) -> str:
r"""Call out to Anthropic's completion endpoint.
def _wrap_prompt(self, prompt: str) -> str:
if not self.HUMAN_PROMPT or not self.AI_PROMPT:
raise NameError("Please ensure the anthropic package is loaded")
if prompt.startswith(self.HUMAN_PROMPT):
return prompt # Already wrapped.
else:
return f"{self.HUMAN_PROMPT} {prompt}{self.AI_PROMPT} Sure, here you go:\n"
Will by default act like an instruction-following model, by wrapping the prompt
with Human: and Assistant: If you want to use for chat or few-shot, pass
in instruct_mode=False
def _get_anthropic_stop(self, stop: Optional[List[str]] = None) -> List[str]:
if not self.HUMAN_PROMPT or not self.AI_PROMPT:
raise NameError("Please ensure the anthropic package is loaded")
if stop is None:
stop = []
# Never want model to invent new turns of Human / Assistant dialog.
stop.extend([self.HUMAN_PROMPT, self.AI_PROMPT])
return stop
def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
r"""Call out to Anthropic's completion endpoint.
Args:
prompt: The prompt to pass into the model.
stop: Optional list of stop words to use when generating.
instruct_mode: Whether to emulate an instruction-following model.
Returns:
The string generated by the model.
@@ -106,38 +129,29 @@ class Anthropic(LLM, BaseModel):
Example:
.. code-block:: python
response = anthropic("Tell me a joke.")
response = anthropic(
"\n\nHuman: Tell me a joke.\n\nAssistant:", instruct_mode=False
)
prompt = "What are the biggest risks facing humanity?"
prompt = f"\n\nHuman: {prompt}\n\nAssistant:"
response = model(prompt)
"""
if stop is None:
stop = []
if not self.HUMAN_PROMPT or not self.AI_PROMPT:
raise NameError("Please ensure the anthropic package is loaded")
# Never want model to invent new turns of Human / Assistant dialog.
stop.extend([self.HUMAN_PROMPT, self.AI_PROMPT])
if instruct_mode:
# Wrap the prompt so it emulates an instruction following model.
prompt = f"{self.HUMAN_PROMPT} prompt{self.AI_PROMPT} Sure, here you go:\n"
stop = self._get_anthropic_stop(stop)
response = self.client.completion(
model=self.model, prompt=prompt, stop_sequences=stop, **self._default_params
model=self.model,
prompt=self._wrap_prompt(prompt),
stop_sequences=stop,
**self._default_params,
)
text = response["completion"]
return text
def stream(self, prompt: str, stop: Optional[List[str]] = None) -> Generator:
"""Call Anthropic completion_stream and return the resulting generator.
r"""Call Anthropic completion_stream and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Args:
prompt: The prompts to pass into the model.
prompt: The prompt to pass into the model.
stop: Optional list of stop words to use when generating.
Returns:
@@ -146,10 +160,17 @@ class Anthropic(LLM, BaseModel):
Example:
.. code-block:: python
generator = anthropic.stream("Tell me a joke.")
prompt = "Write a poem about a stream."
prompt = f"\n\nHuman: {prompt}\n\nAssistant:"
generator = anthropic.stream(prompt)
for token in generator:
yield token
"""
stop = self._get_anthropic_stop(stop)
return self.client.completion_stream(
model=self.model, prompt=prompt, stop_sequences=stop, **self._default_params
model=self.model,
prompt=self._wrap_prompt(prompt),
stop_sequences=stop,
**self._default_params,
)
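
Taken together, these changes mean a bare prompt is now wrapped with `HUMAN_PROMPT`/`AI_PROMPT` automatically, and both markers are always appended as stop sequences so the model cannot invent new dialog turns. A minimal usage sketch, following the class docstring above (`<model_name>` is a placeholder, and the `anthropic` package plus an API key are assumed):

```python
from langchain import Anthropic

# "<model_name>" is a placeholder; substitute a real Anthropic model name.
model = Anthropic(model="<model_name>", anthropic_api_key="my-api-key")

# _wrap_prompt adds HUMAN_PROMPT / AI_PROMPT around the bare prompt, and
# _get_anthropic_stop appends both markers to the stop sequences.
response = model("What are the biggest risks facing humanity?")
print(response)
```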

View File

@@ -0,0 +1,55 @@
"""PromptLayer wrapper."""
import datetime
from typing import List, Optional
from pydantic import BaseModel
from langchain.llms import OpenAI
from langchain.schema import LLMResult
class PromptLayerOpenAI(OpenAI, BaseModel):
"""Wrapper around OpenAI large language models.
To use, you should have the ``openai`` and ``promptlayer`` python
packages installed, and the environment variables ``OPENAI_API_KEY``
and ``PROMPTLAYER_API_KEY`` set with your OpenAI API key and
PromptLayer API key, respectively.
All parameters that can be passed to the OpenAI LLM can also
be passed here. The PromptLayerOpenAI LLM adds an extra
``pl_tags`` parameter that can be used to tag the request.
Example:
.. code-block:: python
from langchain.llms import PromptLayerOpenAI
openai = PromptLayerOpenAI(model_name="text-davinci-003")
"""
pl_tags: Optional[List[str]]
def _generate(
self, prompts: List[str], stop: Optional[List[str]] = None
) -> LLMResult:
"""Call OpenAI generate and then call PromptLayer API to log the request."""
from promptlayer.utils import get_api_key, promptlayer_api_request
request_start_time = datetime.datetime.now().timestamp()
generated_responses = super()._generate(prompts, stop)
request_end_time = datetime.datetime.now().timestamp()
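# Log each prompt/response pair to PromptLayer, tagged and annotated
# with the request start/end timestamps captured above.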
for i in range(len(prompts)):
prompt = prompts[i]
resp = generated_responses.generations[i]
promptlayer_api_request(
"langchain.PromptLayerOpenAI",
"langchain",
[prompt],
self._identifying_params,
self.pl_tags,
resp[0].text,
request_start_time,
request_end_time,
get_api_key(),
)
return generated_responses

poetry.lock generated
View File

@@ -148,6 +148,24 @@ files = [
{file = "alabaster-0.7.13.tar.gz", hash = "sha256:a27a4a084d5e690e16e01e03ad2b2e552c61a65469419b907243193de1a84ae2"},
]
[[package]]
name = "anthropic"
version = "0.2.2"
description = "Library for accessing the anthropic API"
category = "main"
optional = true
python-versions = ">=3.8"
files = [
{file = "anthropic-0.2.2-py3-none-any.whl", hash = "sha256:383cdc6a8509b68b103586ce60c80d86557e90940ace2d12d7e4f193458e1e63"},
{file = "anthropic-0.2.2.tar.gz", hash = "sha256:3fbe61e37bd5f98f3d65ff3ee97bd64a6084f79a222ccb841dad8ff20c43b25e"},
]
[package.dependencies]
requests = "*"
[package.extras]
dev = ["black (>=22.3.0)", "pytest"]
[[package]]
name = "anyio"
version = "3.6.2"
@@ -7021,10 +7039,10 @@ docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "rst.linker
testing = ["flake8 (<5)", "func-timeout", "jaraco.functools", "jaraco.itertools", "more-itertools", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)"]
[extras]
all = ["cohere", "openai", "nlpcloud", "huggingface_hub", "manifest-ml", "elasticsearch", "google-search-results", "faiss-cpu", "sentence-transformers", "transformers", "spacy", "nltk", "wikipedia", "beautifulsoup4", "tiktoken", "torch", "jinja2", "pinecone-client", "weaviate-client", "redis", "google-api-python-client", "wolframalpha", "qdrant-client", "tensorflow-text", "pypdf", "networkx"]
llms = ["cohere", "openai", "nlpcloud", "huggingface_hub", "manifest-ml", "torch", "transformers"]
all = ["anthropic", "cohere", "openai", "nlpcloud", "huggingface_hub", "manifest-ml", "elasticsearch", "google-search-results", "faiss-cpu", "sentence-transformers", "transformers", "spacy", "nltk", "wikipedia", "beautifulsoup4", "tiktoken", "torch", "jinja2", "pinecone-client", "weaviate-client", "redis", "google-api-python-client", "wolframalpha", "qdrant-client", "tensorflow-text", "pypdf", "networkx"]
llms = ["anthropic", "cohere", "openai", "nlpcloud", "huggingface_hub", "manifest-ml", "torch", "transformers"]
[metadata]
lock-version = "2.0"
python-versions = ">=3.8.1,<4.0"
content-hash = "81fa8d3c24ead7311cc06a97fcabfe9a707fb3fc5989caa1569c5ef364cdd508"
content-hash = "690fdd08a207a73cb343cfdf25f7ae7d4177dc39b704d8655f3a4f26a881c2fc"

View File

@@ -33,6 +33,7 @@ pinecone-client = {version = "^2", optional = true}
weaviate-client = {version = "^3", optional = true}
google-api-python-client = {version = "2.70.0", optional = true}
wolframalpha = {version = "5.0.0", optional = true}
anthropic = {version = "^0.2.2", optional = true}
qdrant-client = {version = "^0.11.7", optional = true}
dataclasses-json = "^0.5.7"
tensorflow-text = {version = "^2.11.0", optional = true, python = "^3.10, <3.12"}
@@ -92,8 +93,8 @@ jupyter = "^1.0.0"
playwright = "^1.28.0"
[tool.poetry.extras]
llms = ["cohere", "openai", "nlpcloud", "huggingface_hub", "manifest-ml", "torch", "transformers"]
all = ["cohere", "openai", "nlpcloud", "huggingface_hub", "manifest-ml", "elasticsearch", "google-search-results", "faiss-cpu", "sentence_transformers", "transformers", "spacy", "nltk", "wikipedia", "beautifulsoup4", "tiktoken", "torch", "jinja2", "pinecone-client", "weaviate-client", "redis", "google-api-python-client", "wolframalpha", "qdrant-client", "tensorflow-text", "pypdf", "networkx"]
llms = ["anthropic", "cohere", "openai", "nlpcloud", "huggingface_hub", "manifest-ml", "torch", "transformers"]
all = ["anthropic", "cohere", "openai", "nlpcloud", "huggingface_hub", "manifest-ml", "elasticsearch", "google-search-results", "faiss-cpu", "sentence_transformers", "transformers", "spacy", "nltk", "wikipedia", "beautifulsoup4", "tiktoken", "torch", "jinja2", "pinecone-client", "weaviate-client", "redis", "google-api-python-client", "wolframalpha", "qdrant-client", "tensorflow-text", "pypdf", "networkx"]
[tool.isort]
profile = "black"

View File

@@ -0,0 +1,76 @@
"""Test PromptLayer OpenAI API wrapper."""
from pathlib import Path
from typing import Generator
import pytest
from langchain.llms.loading import load_llm
from langchain.llms.promptlayer_openai import PromptLayerOpenAI
def test_promptlayer_openai_call() -> None:
"""Test valid call to promptlayer openai."""
llm = PromptLayerOpenAI(max_tokens=10)
output = llm("Say foo:")
assert isinstance(output, str)
def test_promptlayer_openai_extra_kwargs() -> None:
"""Test extra kwargs to promptlayer openai."""
# Check that foo is saved in extra_kwargs.
llm = PromptLayerOpenAI(foo=3, max_tokens=10)
assert llm.max_tokens == 10
assert llm.model_kwargs == {"foo": 3}
# Test that if extra_kwargs are provided, they are added to it.
llm = PromptLayerOpenAI(foo=3, model_kwargs={"bar": 2})
assert llm.model_kwargs == {"foo": 3, "bar": 2}
# Test that if provided twice it errors
with pytest.raises(ValueError):
PromptLayerOpenAI(foo=3, model_kwargs={"foo": 2})
def test_promptlayer_openai_stop_valid() -> None:
"""Test promptlayer openai stop logic on valid configuration."""
query = "write an ordered list of five items"
first_llm = PromptLayerOpenAI(stop="3", temperature=0)
first_output = first_llm(query)
second_llm = PromptLayerOpenAI(temperature=0)
second_output = second_llm(query, stop=["3"])
# Both runs stop at "3", so the two outputs should be identical.
assert first_output == second_output
def test_promptlayer_openai_stop_error() -> None:
"""Test promptlayer openai stop logic on bad configuration."""
llm = PromptLayerOpenAI(stop="3", temperature=0)
with pytest.raises(ValueError):
llm("write an ordered list of five items", stop=["\n"])
def test_saving_loading_llm(tmp_path: Path) -> None:
"""Test saving/loading an promptlayer OpenAPI LLM."""
llm = PromptLayerOpenAI(max_tokens=10)
llm.save(file_path=tmp_path / "openai.yaml")
loaded_llm = load_llm(tmp_path / "openai.yaml")
assert loaded_llm == llm
def test_promptlayer_openai_streaming() -> None:
"""Test streaming tokens from promptalyer OpenAI."""
llm = PromptLayerOpenAI(max_tokens=10)
generator = llm.stream("I'm Pickle Rick")
assert isinstance(generator, Generator)
for token in generator:
assert isinstance(token["choices"][0]["text"], str)
def test_promptlayer_openai_streaming_error() -> None:
"""Test error handling in stream."""
llm = PromptLayerOpenAI(best_of=2)
with pytest.raises(ValueError):
llm.stream("I'm Pickle Rick")