{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "mesCTyhnJkNS" }, "source": [ "# Prediction Guard\n", "\n", ">[Prediction Guard](https://docs.predictionguard.com/) gives a quick and easy access to state-of-the-art open and closed access LLMs, without needing to spend days and weeks figuring out all of the implementation details, managing a bunch of different API specs, and setting up the infrastructure for model deployments." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "3RqWPav7AtKL" }, "outputs": [], "source": [ "! pip install predictionguard langchain" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "2xe8JEUwA7_y" }, "outputs": [], "source": [ "import os\n", "\n", "import predictionguard as pg\n", "from langchain.llms import PredictionGuard\n", "from langchain import PromptTemplate, LLMChain" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "kp_Ymnx1SnDG" }, "outputs": [], "source": [ "# Optional, add your OpenAI API Key. This is optional, as Prediction Guard allows\n", "# you to access all the latest open access models (see https://docs.predictionguard.com)\n", "os.environ[\"OPENAI_API_KEY\"] = \"\"\n", "\n", "# Your Prediction Guard API key. Get one at predictionguard.com\n", "os.environ[\"PREDICTIONGUARD_TOKEN\"] = \"\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Ua7Mw1N4HcER" }, "outputs": [], "source": [ "pgllm = PredictionGuard(model=\"OpenAI-text-davinci-003\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Qo2p5flLHxrB" }, "outputs": [], "source": [ "pgllm(\"Tell me a joke\")" ] }, { "cell_type": "markdown", "metadata": { "id": "EyBYaP_xTMXH" }, "source": [ "# Control the output structure/ type of LLMs" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "55uxzhQSTPqF" }, "outputs": [], "source": [ "template = \"\"\"Respond to the following query based on the context.\n", "\n", "Context: EVERY comment, DM + email suggestion has led us to this EXCITING announcement! 🎉 We have officially added TWO new candle subscription box options! 📦\n", "Exclusive Candle Box - $80 \n", "Monthly Candle Box - $45 (NEW!)\n", "Scent of The Month Box - $28 (NEW!)\n", "Head to stories to get ALLL the deets on each box! 👆 BONUS: Save 50% on your first box with code 50OFF! 🎉\n", "\n", "Query: {query}\n", "\n", "Result: \"\"\"\n", "prompt = PromptTemplate(template=template, input_variables=[\"query\"])" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "yersskWbTaxU" }, "outputs": [], "source": [ "# Without \"guarding\" or controlling the output of the LLM.\n", "pgllm(prompt.format(query=\"What kind of post is this?\"))" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "PzxSbYwqTm2w" }, "outputs": [], "source": [ "# With \"guarding\" or controlling the output of the LLM. See the \n", "# Prediction Guard docs (https://docs.predictionguard.com) to learn how to \n", "# control the output with integer, float, boolean, JSON, and other types and\n", "# structures.\n", "pgllm = PredictionGuard(model=\"OpenAI-text-davinci-003\", \n", " output={\n", " \"type\": \"categorical\",\n", " \"categories\": [\n", " \"product announcement\", \n", " \"apology\", \n", " \"relational\"\n", " ]\n", " })\n", "pgllm(prompt.format(query=\"What kind of post is this?\"))" ] }, { "cell_type": "markdown", "metadata": { "id": "v3MzIUItJ8kV" }, "source": [ "# Chaining" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "pPegEZExILrT" }, "outputs": [], "source": [ "pgllm = PredictionGuard(model=\"OpenAI-text-davinci-003\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "suxw62y-J-bg" }, "outputs": [], "source": [ "template = \"\"\"Question: {question}\n", "\n", "Answer: Let's think step by step.\"\"\"\n", "prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n", "llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)\n", "\n", "question = \"What NFL team won the Super Bowl in the year Justin Beiber was born?\"\n", "\n", "llm_chain.predict(question=question)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "l2bc26KHKr7n" }, "outputs": [], "source": [ "template = \"\"\"Write a {adjective} poem about {subject}.\"\"\"\n", "prompt = PromptTemplate(template=template, input_variables=[\"adjective\", \"subject\"])\n", "llm_chain = LLMChain(prompt=prompt, llm=pgllm, verbose=True)\n", "\n", "llm_chain.predict(adjective=\"sad\", subject=\"ducks\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "I--eSa2PLGqq" }, "outputs": [], "source": [] } ], "metadata": { "colab": { "provenance": [] }, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.6" } }, "nbformat": 4, "nbformat_minor": 4 }