# Prediction Guard
This page covers how to use the Prediction Guard ecosystem within LangChain. It is broken into two parts: installation and setup, and then references to specific Prediction Guard wrappers.
## Installation and Setup
- Install the Python SDK with `pip install predictionguard`
- Get a Prediction Guard access token (as described here) and set it as an environment variable (`PREDICTIONGUARD_TOKEN`), for example as shown below.
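A minimal shell sketch of the environment-variable step (the token value is a placeholder):

```bash
# Make the access token available to the Prediction Guard wrapper.
export PREDICTIONGUARD_TOKEN="<your access token>"
```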
## LLM Wrapper
There exists a Prediction Guard LLM wrapper, which you can access with

```python
from langchain.llms import PredictionGuard
```
You can provide the name of your Prediction Guard "proxy" as an argument when initializing the LLM:

```python
pgllm = PredictionGuard(name="your-text-gen-proxy")
```
Alternatively, you can use Prediction Guard's default proxy for SOTA LLMs:

```python
pgllm = PredictionGuard(name="default-text-gen")
```
You can also provide your access token directly as an argument:

```python
pgllm = PredictionGuard(name="default-text-gen", token="<your access token>")
```
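For instance, here is a minimal sketch that reads the token from the `PREDICTIONGUARD_TOKEN` environment variable set during installation and passes it in explicitly (error handling omitted):

```python
import os

from langchain.llms import PredictionGuard

# Read the access token exported during installation and setup.
token = os.environ["PREDICTIONGUARD_TOKEN"]

# Pass the token explicitly rather than relying on the environment.
pgllm = PredictionGuard(name="default-text-gen", token=token)
```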
## Example usage
Basic usage of the LLM wrapper:

```python
from langchain.llms import PredictionGuard

# Use the default text-generation proxy and call the LLM directly.
pgllm = PredictionGuard(name="default-text-gen")
pgllm("Tell me a joke")
```
Basic LLM chaining with the Prediction Guard wrapper:

```python
from langchain import PromptTemplate, LLMChain
from langchain.llms import PredictionGuard

# Prompt template with a single input variable.
template = """Question: {question}

Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

llm_chain = LLMChain(prompt=prompt, llm=PredictionGuard(name="default-text-gen"), verbose=True)

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.predict(question=question)
```
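As a usage note, the same chain can also be invoked through the generic `Chain.run` interface, which for single-input chains accepts the input positionally (a sketch assuming the `llm_chain` defined above):

```python
# Equivalent invocation; single-input chains accept the input positionally.
llm_chain.run("What NFL team won the Super Bowl in the year Justin Bieber was born?")
```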