community[minor]: Prem Templates (#22783)

This PR adds the Prem Templates feature to ChatPremAI. Additionally, it fixes a minor bug that raised an API auth error when the API key was passed through arguments.
@ -238,6 +238,67 @@
    "> Ideally, you do not need to connect Repository IDs here to get Retrieval Augmented Generation. You can still get the same result if you have connected the repositories in the Prem platform. "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Prem Templates\n",
    "\n",
    "Writing prompt templates can be super messy. Prompt templates are long, hard to manage, and must be continuously tweaked to improve while staying consistent throughout the application. \n",
    "\n",
    "With **Prem**, writing and managing prompts is super easy. The **_Templates_** tab inside the [launchpad](https://docs.premai.io/get-started/launchpad) helps you write as many prompts as you need and use them inside the SDK to run your application with those prompts. You can read more about prompt templates [here](https://docs.premai.io/get-started/prem-templates). \n",
    "\n",
    "To use Prem Templates natively with LangChain, you need to pass an `id` to each `HumanMessage`. This `id` should be the name of the corresponding variable in your prompt template, and the `content` of the `HumanMessage` should be the value of that variable. \n",
    "\n",
    "Let's say, for example, that your prompt template is this:\n",
    "\n",
    "```text\n",
    "Say hello to my name and say a feel-good quote\n",
    "from my age. My name is: {name} and age is {age}\n",
    "```\n",
    "\n",
    "So now your human_messages should look like:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "human_messages = [\n",
    "    HumanMessage(content=\"Shawn\", id=\"name\"),\n",
    "    HumanMessage(content=\"22\", id=\"age\"),\n",
    "]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Pass this `human_messages` to the ChatPremAI client. Please note: do not forget to\n",
    "pass the additional `template_id` to invoke generation with Prem Templates. If you are not aware of `template_id`, you can learn more about it [in our docs](https://docs.premai.io/get-started/prem-templates). Here is an example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "template_id = \"78069ce8-xxxxx-xxxxx-xxxx-xxx\"\n",
    "response = chat.invoke(human_messages, template_id=template_id)\n",
    "print(response.content)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The Prem Templates feature is available in streaming too. "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
@ -113,29 +113,21 @@ The diameters of individual galaxies range from 80,000-150,000 light-years.
{
    "document_chunks": [
        {
            "repository_id": 19xx,
            "document_id": 13xx,
            "chunk_id": 173xxx,
            "document_name": "Kegy 202 Chapter 2",
            "similarity_score": 0.586126983165741,
            "content": "n thousands\n of light-years. The diameters of individual\n galaxies range from 80,000-150,000 light\n "
        },
        {
            "repository_id": 19xx,
            "document_id": 13xx,
            "chunk_id": 173xxx,
            "document_name": "Kegy 202 Chapter 2",
            "similarity_score": 0.4815782308578491,
            "content": " for development of galaxies. A galaxy contains\n a large number of stars. Galaxies spread over\n vast distances that are measured in thousands\n "
        }
    ]
}
```
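For readers working with this response shape, here is a small illustrative snippet (not part of the commit; the literal below is a trimmed stand-in for the payload shown above) of how the retrieved chunks can be inspected:

```python
# A trimmed stand-in for the "document_chunks" payload printed above.
response_json = {
    "document_chunks": [
        {"document_name": "Kegy 202 Chapter 2", "chunk_id": "173xxx",
         "similarity_score": 0.586, "content": "..."},
    ]
}

# Rank the retrieved chunks by similarity before showing their sources.
for chunk in sorted(response_json["document_chunks"],
                    key=lambda c: c["similarity_score"], reverse=True):
    print(f"{chunk['document_name']}: {chunk['similarity_score']:.3f}")
```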
@ -173,7 +165,42 @@ This will stream tokens one after the other.

> Please note: As of now, RAG with streaming is not supported. However, we still support it with our API. You can learn more about that [here](https://docs.premai.io/get-started/chat-completion-sse).

## Prem Templates

Writing prompt templates can be super messy. Prompt templates are long, hard to manage, and must be continuously tweaked to improve while staying consistent throughout the application.

With **Prem**, writing and managing prompts is super easy. The **_Templates_** tab inside the [launchpad](https://docs.premai.io/get-started/launchpad) helps you write as many prompts as you need and use them inside the SDK to run your application with those prompts. You can read more about prompt templates [here](https://docs.premai.io/get-started/prem-templates).

To use Prem Templates natively with LangChain, you need to pass an `id` to each `HumanMessage`. This `id` should be the name of the corresponding variable in your prompt template, and the `content` of the `HumanMessage` should be the value of that variable.

Let's say, for example, that your prompt template is this:

```text
Say hello to my name and say a feel-good quote
from my age. My name is: {name} and age is {age}
```

So now your human_messages should look like:
```python
human_messages = [
    HumanMessage(content="Shawn", id="name"),
    HumanMessage(content="22", id="age"),
]
```

Pass this `human_messages` to the ChatPremAI client. Please note: do not forget to pass the additional `template_id` to invoke generation with Prem Templates. If you are not aware of `template_id`, you can learn more about it [in our docs](https://docs.premai.io/get-started/prem-templates). Here is an example:

```python
template_id = "78069ce8-xxxxx-xxxxx-xxxx-xxx"
response = chat.invoke(human_messages, template_id=template_id)
print(response.content)
```

Prem Templates are also available with streaming.
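Here is a minimal sketch of what that could look like, reusing `chat`, `human_messages`, and `template_id` from the examples above (illustrative, not taken from the PR):

```python
# Stream the templated generation chunk by chunk.
for chunk in chat.stream(human_messages, template_id=template_id):
    print(chunk.content, end="", flush=True)
```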

## Prem Embeddings

In this section we are going to discuss how we can get access to different embedding models using `PremEmbeddings` with LangChain. Let's start by importing our modules and setting our API key.
@ -149,26 +149,49 @@ def _convert_delta_response_to_message_chunk(


def _messages_to_prompt_dict(
    input_messages: List[BaseMessage],
    template_id: Optional[str] = None,
) -> Tuple[Optional[str], List[Dict[str, Any]]]:
    """Converts a list of LangChain Messages into a simple dict
    which is the message structure in Prem"""

    system_prompt: Optional[str] = None
    examples_and_messages: List[Dict[str, Any]] = []

    if template_id is not None:
        # Template mode: every non-system message must carry an `id`, which is
        # used as the name of the corresponding template variable.
        params: Dict[str, str] = {}
        for input_msg in input_messages:
            if isinstance(input_msg, SystemMessage):
                system_prompt = str(input_msg.content)
            else:
                assert (input_msg.id is not None) and (input_msg.id != ""), ValueError(
                    "When using prompt template there should be id associated ",
                    "with each HumanMessage",
                )
                params[str(input_msg.id)] = str(input_msg.content)

        examples_and_messages.append(
            {"role": "user", "template_id": template_id, "params": params}
        )

        for input_msg in input_messages:
            if isinstance(input_msg, AIMessage):
                examples_and_messages.append(
                    {"role": "assistant", "content": str(input_msg.content)}
                )
    else:
        # Plain chat mode: map each LangChain message to a role/content dict.
        for input_msg in input_messages:
            if isinstance(input_msg, SystemMessage):
                system_prompt = str(input_msg.content)
            elif isinstance(input_msg, HumanMessage):
                examples_and_messages.append(
                    {"role": "user", "content": str(input_msg.content)}
                )
            elif isinstance(input_msg, AIMessage):
                examples_and_messages.append(
                    {"role": "assistant", "content": str(input_msg.content)}
                )
            else:
                raise ChatPremAPIError("No such role explicitly exists")
    return system_prompt, examples_and_messages
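To make the new template path concrete, here is a hypothetical call (not part of the diff) and the structures it would return:

```python
from langchain_core.messages import HumanMessage, SystemMessage

msgs = [
    SystemMessage(content="You are a kind assistant."),
    HumanMessage(content="Shawn", id="name"),
    HumanMessage(content="22", id="age"),
]
system_prompt, payload = _messages_to_prompt_dict(
    msgs, template_id="78069ce8-xxxxx-xxxxx-xxxx-xxx"
)

# system_prompt == "You are a kind assistant."
# payload == [{
#     "role": "user",
#     "template_id": "78069ce8-xxxxx-xxxxx-xxxx-xxx",
#     "params": {"name": "Shawn", "age": "22"},
# }]
```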
@ -238,10 +261,14 @@ class ChatPremAI(BaseChatModel, BaseModel):
            ) from error

        try:
            premai_api_key: Union[str, SecretStr] = get_from_dict_or_env(
                values, "premai_api_key", "PREMAI_API_KEY"
            )
            # The Prem client expects a plain string, so unwrap a SecretStr
            # before constructing it.
            values["client"] = Prem(
                api_key=premai_api_key
                if isinstance(premai_api_key, str)
                else premai_api_key._secret_value
            )
        except Exception as error:
            raise ValueError("Your API Key is incorrect. Please try again.") from error
        return values
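For context, a short illustrative snippet (assuming pydantic's `SecretStr`, which LangChain uses for secrets) of why the unwrap above is needed:

```python
from pydantic import SecretStr

key = SecretStr("my-premai-key")
print(key)                     # **********  (the repr masks the secret)
print(key.get_secret_value())  # my-premai-key (public accessor for the raw value)
# The diff reads the private `_secret_value` attribute, which holds the same
# raw string that `get_secret_value()` returns.
```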
@ -293,7 +320,12 @@ class ChatPremAI(BaseChatModel, BaseModel):
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        if "template_id" in kwargs:
            system_prompt, messages_to_pass = _messages_to_prompt_dict(
                messages, template_id=kwargs["template_id"]
            )
        else:
            system_prompt, messages_to_pass = _messages_to_prompt_dict(messages)  # type: ignore

        if system_prompt is not None and system_prompt != "":
            kwargs["system_prompt"] = system_prompt
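One consequence of the lines above, sketched with a hypothetical input (not from the diff): any `SystemMessage` is lifted out of the message list and forwarded separately as the `system_prompt` kwarg:

```python
# Hypothetical illustration of the system-prompt lift in the non-template path.
system_prompt, messages_to_pass = _messages_to_prompt_dict(
    [SystemMessage(content="Be brief."), HumanMessage(content="Hi")]
)
# system_prompt == "Be brief."
# messages_to_pass == [{"role": "user", "content": "Hi"}]
# _generate then sets kwargs["system_prompt"] = "Be brief."
```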
@ -317,7 +349,12 @@ class ChatPremAI(BaseChatModel, BaseModel):
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> Iterator[ChatGenerationChunk]:
        if "template_id" in kwargs:
            system_prompt, messages_to_pass = _messages_to_prompt_dict(
                messages, template_id=kwargs["template_id"]
            )  # type: ignore
        else:
            system_prompt, messages_to_pass = _messages_to_prompt_dict(messages)  # type: ignore

        if stop is not None:
            logger.warning("stop is not supported in langchain streaming")