# Prediction Guard
>[Prediction Guard](https://docs.predictionguard.com/) gives quick and easy access to state-of-the-art open and closed access LLMs, without needing to spend days or weeks figuring out implementation details, managing a bunch of different API specs, or setting up the infrastructure for model deployments.
## Installation and Setup
- Install the Python SDK:
```bash
pip install predictionguard
```
- Get a Prediction Guard access token (as described [here](https://docs.predictionguard.com/)) and set it as an environment variable (`PREDICTIONGUARD_TOKEN`).
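
For example, in a Unix-like shell (the token value below is a placeholder):

```bash
# Replace with your actual Prediction Guard access token
export PREDICTIONGUARD_TOKEN="<your Prediction Guard access token>"
```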
## LLM
```python
from langchain.llms import PredictionGuard
```
### Example
You can provide the name of the Prediction Guard model as an argument when initializing the LLM:
```python
pgllm = PredictionGuard(model="MPT-7B-Instruct")
```
You can also provide your access token directly as an argument:
```python
pgllm = PredictionGuard(model="MPT-7B-Instruct", token="<your access token>")
```
You can also provide an `output` argument that is used to structure or control the output of the LLM:
```python
pgllm = PredictionGuard(model="MPT-7B-Instruct", output={"type": "boolean"})
```
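
As a minimal sketch (the prompt below is hypothetical), the controlled model is then called like any other LangChain LLM, with its output constrained to the declared type:

```python
# Hypothetical prompt; the output={"type": "boolean"} setting above
# constrains the model's response to a boolean value.
result = pgllm("Is the following statement a question? The sky is blue.")
print(result)
```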
#### Basic usage of the controlled or guarded LLM:
```python
import os

from langchain import PromptTemplate
from langchain.llms import PredictionGuard

# Your Prediction Guard API key. Get one at predictionguard.com
os.environ["PREDICTIONGUARD_TOKEN"] = "<your Prediction Guard access token>"

# Define a prompt template
template = """Respond to the following query based on the context.

Context: EVERY comment, DM + email suggestion has led us to this EXCITING announcement! 🎉 We have officially added TWO new candle subscription box options! 📦
Exclusive Candle Box - $80
Monthly Candle Box - $45 (NEW!)
Scent of The Month Box - $28 (NEW!)
Head to stories to get ALLL the deets on each box! 👆 BONUS: Save 50% on your first box with code 50OFF! 🎉

Query: {query}

Result: """
prompt = PromptTemplate(template=template, input_variables=["query"])

# "Guard" or control the output of the LLM. See the Prediction Guard docs
# (https://docs.predictionguard.com) to learn how to control the output with
# integer, float, boolean, JSON, and other types and structures.
pgllm = PredictionGuard(
    model="MPT-7B-Instruct",
    output={
        "type": "categorical",
        "categories": [
            "product announcement",
            "apology",
            "relational",
        ],
    },
)
pgllm(prompt.format(query="What kind of post is this?"))
```
#### Basic LLM Chaining with Prediction Guard:
```python
import os

from langchain import PromptTemplate, LLMChain
from langchain.llms import PredictionGuard

# Optionally, add your OpenAI API key. This is only needed if you want to use
# OpenAI models; Prediction Guard also gives you access to all the latest
# open access models (see https://docs.predictionguard.com).
os.environ["OPENAI_API_KEY"] = "<your OpenAI API key>"

# Your Prediction Guard API key. Get one at predictionguard.com
os.environ["PREDICTIONGUARD_TOKEN"] = "<your Prediction Guard access token>"

pgllm = PredictionGuard(model="OpenAI-text-davinci-003")

template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.predict(question=question)
```