Adding the ability to `return_pl_id` to all PromptLayer models in LangChain (#1699)

PromptLayer now has support for [several different tracking
features](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9).
In order to use any of these features, you need a request ID
associated with the request.

In this PR we add a boolean argument called `return_pl_id` that adds
`pl_request_id` to the `generation_info` dictionary associated with
a generation.
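
For example, a minimal sketch of the new argument (mirroring the updated docs below):

```python
from langchain.llms import PromptLayerOpenAI

llm = PromptLayerOpenAI(return_pl_id=True)
llm_results = llm.generate(["hello world"])

# Each generation now carries the PromptLayer request ID.
for res in llm_results.generations:
    print(res[0].generation_info["pl_request_id"])
```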

We also updated the relevant documentation.
Jonathan Pedoeem committed via GitHub · branch `tool-patch` · commit 606605925d (parent f93c011456)

@@ -25,9 +25,25 @@ from langchain.llms import PromptLayerOpenAI
llm = PromptLayerOpenAI(pl_tags=["langchain-requests", "chatbot"])
```
To get the PromptLayer request ID, use the argument `return_pl_id` when instantiating the LLM:
```python
from langchain.llms import PromptLayerOpenAI
llm = PromptLayerOpenAI(return_pl_id=True)
```
This will add the PromptLayer request ID to the `generation_info` field of the `Generation` returned when using `.generate` or `.agenerate`.
For example:
```python
llm_results = llm.generate(["hello world"])
for res in llm_results.generations:
print("pl request id: ", res[0].generation_info["pl_request_id"])
```
You can use the PromptLayer request ID to add a prompt, score, or other metadata to your request. [Read more about it here](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9).
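For instance, a minimal sketch of scoring a request with the `promptlayer` client, using the same `promptlayer.track.score` call that appears in the notebooks below:
```python
import promptlayer

from langchain.llms import PromptLayerOpenAI

llm = PromptLayerOpenAI(return_pl_id=True)
llm_results = llm.generate(["hello world"])

for res in llm_results.generations:
    pl_request_id = res[0].generation_info["pl_request_id"]
    # Attach a 0-100 score to the logged request on PromptLayer.
    promptlayer.track.score(request_id=pl_request_id, score=100)
```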
This LLM is identical to the [OpenAI LLM](./openai), except that
- all your requests will be logged to your PromptLayer account
- you can add `pl_tags` when instantiating to tag your requests on PromptLayer
- you can add `return_pl_id` when instantiating to return a PromptLayer request ID to use [while tracking requests](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9).
PromptLayer also provides native wrappers for [`PromptLayerChatOpenAI`](../modules/chat/examples/promptlayer_chat_openai.ipynb)
PromptLayer also provides native wrappers for [`PromptLayerChatOpenAI`](../modules/chat/examples/promptlayer_chat_openai.ipynb) and `PromptLayerOpenAIChat`

@@ -123,6 +123,40 @@
"id": "05e9e2fe",
"metadata": {},
"source": []
},
{
"attachments": {},
"cell_type": "markdown",
"id": "c43803d1",
"metadata": {},
"source": [
"## Using PromptLayer Track\n",
"If you would like to use any of the [PromptLayer tracking features](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9), you need to pass the argument `return_pl_id` when instantializing the PromptLayer LLM to get the request id. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b7d4db01",
"metadata": {},
"outputs": [],
"source": [
"chat = PromptLayerChatOpenAI(return_pl_id=True)\n",
"chat_results = chat.generate([[HumanMessage(content=\"I am a cat and I want\")]])\n",
"\n",
"for res in chat_results.generations:\n",
" pl_request_id = res[0].generation_info[\"pl_request_id\"]\n",
" promptlayer.track.score(request_id=pl_request_id, score=100)"
]
},
{
"cell_type": "markdown",
"id": "13e56507",
"metadata": {},
"source": [
"Using this allows you to track the performance of your model in the PromptLayer dashboard. If you are using a prompt template, you can attach a template to a request as well.\n",
"Overall, this gives you the opportunity to track the performance of different templates and models in the PromptLayer dashboard."
]
}
],
"metadata": {
@@ -141,11 +175,11 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8"
"version": "3.8.8 (default, Apr 13 2021, 12:59:45) \n[Clang 10.0.0 ]"
},
"vscode": {
"interpreter": {
"hash": "c4fe2cd85a8d9e8baaec5340ce66faff1c77581a9f43e6c45e85e09b6fced008"
"hash": "8a5edab282632443219e051e4ade2d1d5bbc671c781051bf1437897cbdfea0f1"
}
}
},
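The notebook cells above mention attaching a prompt template to a request. A hedged sketch, assuming the `promptlayer.track.prompt` helper and its `prompt_name`/`prompt_input_variables` parameters from the PromptLayer client (these are assumptions, not part of this PR):
```python
import promptlayer

# Assumed PromptLayer client API, not part of this PR: link a registered
# template to a request that was logged with `return_pl_id=True`.
promptlayer.track.prompt(
    request_id=pl_request_id,
    prompt_name="my_template",
    prompt_input_variables={"topic": "cats"},
)
```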

@@ -119,10 +119,39 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "05e9e2fe",
"metadata": {},
"source": []
"source": [
"## Using PromptLayer Track\n",
"If you would like to use any of the [PromptLayer tracking features](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9), you need to pass the argument `return_pl_id` when instantializing the PromptLayer LLM to get the request id. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1a7315b9",
"metadata": {},
"outputs": [],
"source": [
"llm = PromptLayerOpenAI(return_pl_id=True)\n",
"llm_results = llm.generate([\"Tell me a joke\"])\n",
"\n",
"for res in llm_results.generations:\n",
" pl_request_id = res[0].generation_info[\"pl_request_id\"]\n",
" promptlayer.track.score(request_id=pl_request_id, score=100)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "7eb19139",
"metadata": {},
"source": [
"Using this allows you to track the performance of your model in the PromptLayer dashboard. If you are using a prompt template, you can attach a template to a request as well.\n",
"Overall, this gives you the opportunity to track the performance of different templates and models in the PromptLayer dashboard."
]
}
],
"metadata": {
@@ -145,7 +174,7 @@
},
"vscode": {
"interpreter": {
"hash": "c4fe2cd85a8d9e8baaec5340ce66faff1c77581a9f43e6c45e85e09b6fced008"
"hash": "8a5edab282632443219e051e4ade2d1d5bbc671c781051bf1437897cbdfea0f1"
}
}
},
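Since the docs above note that `.agenerate` carries the request ID as well, a minimal async sketch (assuming an `asyncio` entrypoint):
```python
import asyncio

from langchain.llms import PromptLayerOpenAI


async def main() -> None:
    llm = PromptLayerOpenAI(return_pl_id=True)
    llm_results = await llm.agenerate(["Tell me a joke"])
    for res in llm_results.generations:
        print(res[0].generation_info["pl_request_id"])


asyncio.run(main())
```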

@@ -17,8 +17,12 @@ class PromptLayerChatOpenAI(ChatOpenAI, BaseModel):
promptlayer key respectively.
All parameters that can be passed to the OpenAI LLM can also
be passed here. The PromptLayerChatOpenAI LLM adds an extra
``pl_tags`` parameter that can be used to tag the request.
be passed here. The PromptLayerChatOpenAI adds two optional
parameters:
``pl_tags``: List of strings to tag the request with.
``return_pl_id``: If True, the PromptLayer request ID will be
returned in the ``generation_info`` field of the
``Generation`` object.
Example:
.. code-block:: python
@@ -28,6 +32,7 @@ class PromptLayerChatOpenAI(ChatOpenAI, BaseModel):
"""
pl_tags: Optional[List[str]]
return_pl_id: Optional[bool] = False
def _generate(
self, messages: List[BaseMessage], stop: Optional[List[str]] = None
@@ -43,7 +48,7 @@ class PromptLayerChatOpenAI(ChatOpenAI, BaseModel):
response_dict, params = super()._create_message_dicts(
[generation.message], stop
)
promptlayer_api_request(
pl_request_id = promptlayer_api_request(
"langchain.PromptLayerChatOpenAI",
"langchain",
message_dicts,
@@ -53,7 +58,14 @@ class PromptLayerChatOpenAI(ChatOpenAI, BaseModel):
request_start_time,
request_end_time,
get_api_key(),
return_pl_id=self.return_pl_id,
)
if self.return_pl_id:
if generation.generation_info is None or not isinstance(
generation.generation_info, dict
):
generation.generation_info = {}
generation.generation_info["pl_request_id"] = pl_request_id
return generated_responses
async def _agenerate(
@@ -70,7 +82,7 @@ class PromptLayerChatOpenAI(ChatOpenAI, BaseModel):
response_dict, params = super()._create_message_dicts(
[generation.message], stop
)
promptlayer_api_request(
pl_request_id = promptlayer_api_request(
"langchain.PromptLayerChatOpenAI.async",
"langchain",
message_dicts,
@@ -80,5 +92,12 @@ class PromptLayerChatOpenAI(ChatOpenAI, BaseModel):
request_start_time,
request_end_time,
get_api_key(),
return_pl_id=self.return_pl_id,
)
if self.return_pl_id:
if generation.generation_info is None or not isinstance(
generation.generation_info, dict
):
generation.generation_info = {}
generation.generation_info["pl_request_id"] = pl_request_id
return generated_responses
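
Putting the two docstring parameters together, a minimal sketch of the chat model (assuming the `langchain.chat_models` import path used by the notebook):

```python
from langchain.chat_models import PromptLayerChatOpenAI
from langchain.schema import HumanMessage

chat = PromptLayerChatOpenAI(pl_tags=["langchain"], return_pl_id=True)
chat_results = chat.generate([[HumanMessage(content="hello")]])

for res in chat_results.generations:
    print(res[0].generation_info["pl_request_id"])
```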

@@ -17,8 +17,12 @@ class PromptLayerOpenAI(OpenAI, BaseModel):
promptlayer key respectively.
All parameters that can be passed to the OpenAI LLM can also
be passed here. The PromptLayerOpenAI LLM adds an extra
``pl_tags`` parameter that can be used to tag the request.
be passed here. The PromptLayerOpenAI LLM adds two optional
parameters:
``pl_tags``: List of strings to tag the request with.
``return_pl_id``: If True, the PromptLayer request ID will be
returned in the ``generation_info`` field of the
``Generation`` object.
Example:
.. code-block:: python
@@ -28,6 +32,7 @@ class PromptLayerOpenAI(OpenAI, BaseModel):
"""
pl_tags: Optional[List[str]]
return_pl_id: Optional[bool] = False
def _generate(
self, prompts: List[str], stop: Optional[List[str]] = None
@@ -40,11 +45,12 @@ class PromptLayerOpenAI(OpenAI, BaseModel):
request_end_time = datetime.datetime.now().timestamp()
for i in range(len(prompts)):
prompt = prompts[i]
generation = generated_responses.generations[i][0]
resp = {
"text": generated_responses.generations[i][0].text,
"text": generation.text,
"llm_output": generated_responses.llm_output,
}
promptlayer_api_request(
pl_request_id = promptlayer_api_request(
"langchain.PromptLayerOpenAI",
"langchain",
[prompt],
@@ -54,7 +60,14 @@ class PromptLayerOpenAI(OpenAI, BaseModel):
request_start_time,
request_end_time,
get_api_key(),
return_pl_id=self.return_pl_id,
)
if self.return_pl_id:
if generation.generation_info is None or not isinstance(
generation.generation_info, dict
):
generation.generation_info = {}
generation.generation_info["pl_request_id"] = pl_request_id
return generated_responses
async def _agenerate(
@@ -67,11 +80,12 @@ class PromptLayerOpenAI(OpenAI, BaseModel):
request_end_time = datetime.datetime.now().timestamp()
for i in range(len(prompts)):
prompt = prompts[i]
generation = generated_responses.generations[i][0]
resp = {
"text": generated_responses.generations[i][0].text,
"text": generation.text,
"llm_output": generated_responses.llm_output,
}
promptlayer_api_request(
pl_request_id = promptlayer_api_request(
"langchain.PromptLayerOpenAI.async",
"langchain",
[prompt],
@@ -81,7 +95,14 @@ class PromptLayerOpenAI(OpenAI, BaseModel):
request_start_time,
request_end_time,
get_api_key(),
return_pl_id=self.return_pl_id,
)
if self.return_pl_id:
if generation.generation_info is None or not isinstance(
generation.generation_info, dict
):
generation.generation_info = {}
generation.generation_info["pl_request_id"] = pl_request_id
return generated_responses
@@ -94,8 +115,12 @@ class PromptLayerOpenAIChat(OpenAIChat, BaseModel):
promptlayer key respectively.
All parameters that can be passed to the OpenAIChat LLM can also
be passed here. The PromptLayerOpenAIChat LLM adds an extra
``pl_tags`` parameter that can be used to tag the request.
be passed here. The PromptLayerOpenAIChat adds two optional
parameters:
``pl_tags``: List of strings to tag the request with.
``return_pl_id``: If True, the PromptLayer request ID will be
returned in the ``generation_info`` field of the
``Generation`` object.
Example:
.. code-block:: python
@@ -105,6 +130,7 @@ class PromptLayerOpenAIChat(OpenAIChat, BaseModel):
"""
pl_tags: Optional[List[str]]
return_pl_id: Optional[bool] = False
def _generate(
self, prompts: List[str], stop: Optional[List[str]] = None
@@ -117,11 +143,12 @@ class PromptLayerOpenAIChat(OpenAIChat, BaseModel):
request_end_time = datetime.datetime.now().timestamp()
for i in range(len(prompts)):
prompt = prompts[i]
generation = generated_responses.generations[i][0]
resp = {
"text": generated_responses.generations[i][0].text,
"text": generation.text,
"llm_output": generated_responses.llm_output,
}
promptlayer_api_request(
pl_request_id = promptlayer_api_request(
"langchain.PromptLayerOpenAIChat",
"langchain",
[prompt],
@@ -131,7 +158,14 @@ class PromptLayerOpenAIChat(OpenAIChat, BaseModel):
request_start_time,
request_end_time,
get_api_key(),
return_pl_id=self.return_pl_id,
)
if self.return_pl_id:
if generation.generation_info is None or not isinstance(
generation.generation_info, dict
):
generation.generation_info = {}
generation.generation_info["pl_request_id"] = pl_request_id
return generated_responses
async def _agenerate(
@@ -144,16 +178,27 @@ class PromptLayerOpenAIChat(OpenAIChat, BaseModel):
request_end_time = datetime.datetime.now().timestamp()
for i in range(len(prompts)):
prompt = prompts[i]
resp = generated_responses.generations[i]
promptlayer_api_request(
generation = generated_responses.generations[i][0]
resp = {
"text": generation.text,
"llm_output": generated_responses.llm_output,
}
pl_request_id = promptlayer_api_request(
"langchain.PromptLayerOpenAIChat.async",
"langchain",
[prompt],
self._identifying_params,
self.pl_tags,
resp[0].text,
resp,
request_start_time,
request_end_time,
get_api_key(),
return_pl_id=self.return_pl_id,
)
if self.return_pl_id:
if generation.generation_info is None or not isinstance(
generation.generation_info, dict
):
generation.generation_info = {}
generation.generation_info["pl_request_id"] = pl_request_id
return generated_responses
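
The guard repeated in each `_generate`/`_agenerate` exists because `generation_info` defaults to `None`; in isolation the pattern is (a minimal sketch, assuming `Generation` from `langchain.schema`):

```python
from langchain.schema import Generation

generation = Generation(text="hello")

# generation_info defaults to None, so normalize it to a dict before
# attaching the PromptLayer request ID.
if generation.generation_info is None or not isinstance(
    generation.generation_info, dict
):
    generation.generation_info = {}
generation.generation_info["pl_request_id"] = "pl-request-id-from-api"
```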
