{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Petals\n", "\n", "`Petals` runs 100B+ language models at home, BitTorrent-style.\n", "\n", "This notebook goes over how to use Langchain with [Petals](https://github.com/bigscience-workshop/petals)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Install petals\n", "The `petals` package is required to use the Petals API. Install `petals` using `pip3 install petals`.\n", "\n", "For Apple Silicon(M1/M2) users please follow this guide [https://github.com/bigscience-workshop/petals/issues/147#issuecomment-1365379642](https://github.com/bigscience-workshop/petals/issues/147#issuecomment-1365379642) to install petals " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!pip3 install petals" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Imports" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "import os\n", "from langchain.llms import Petals\n", "from langchain.prompts import PromptTemplate\nfrom langchain.chains import LLMChain" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Set the Environment API Key\n", "Make sure to get [your API key](https://huggingface.co/docs/api-inference/quicktour#get-your-api-token) from Huggingface." ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " ········\n" ] } ], "source": [ "from getpass import getpass\n", "\n", "HUGGINGFACE_API_KEY = getpass()" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "os.environ[\"HUGGINGFACE_API_KEY\"] = HUGGINGFACE_API_KEY" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create the Petals instance\n", "You can specify different parameters such as the model name, max new tokens, temperature, etc." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Downloading: 1%|▏ | 40.8M/7.19G [00:24<15:44, 7.57MB/s]" ] } ], "source": [ "# this can take several minutes to download big files!\n", "\n", "llm = Petals(model_name=\"bigscience/bloom-petals\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create a Prompt Template\n", "We will create a prompt template for Question and Answer." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "template = \"\"\"Question: {question}\n", "\n", "Answer: Let's think step by step.\"\"\"\n", "\n", "prompt = PromptTemplate(template=template, input_variables=[\"question\"])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Initiate the LLMChain" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "llm_chain = LLMChain(prompt=prompt, llm=llm)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Run the LLMChain\n", "Provide a question and run the LLMChain." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "question = \"What NFL team won the Super Bowl in the year Justin Beiber was born?\"\n", "\n", "llm_chain.run(question)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.12" }, "vscode": { "interpreter": { "hash": "a0a0263b650d907a3bfe41c0f8d6a63a071b884df3cfdc1579f00cdc1aed6b03" } } }, "nbformat": 4, "nbformat_minor": 4 }