{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# CerebriumAI LLM Example\n", "This notebook goes over how to use Langchain with [CerebriumAI](https://docs.cerebrium.ai/introduction)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Install cerebrium\n", "The `cerebrium` package is required to use the CerebriumAI API. Install `cerebrium` using `pip3 install cerebrium`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "$ pip3 install cerebrium" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Imports" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "from langchain.llms import CerebriumAI\n", "from langchain import PromptTemplate, LLMChain" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Set the Environment API Key\n", "Make sure to get your API key from CerebriumAI. You are given a 1 hour free of serverless GPU compute to test different models." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "os.environ[\"CEREBRIUMAI_API_KEY\"] = \"YOUR_KEY_HERE\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create the CerebriumAI instance\n", "You can specify different parameters such as the model endpoint url, max length, temperature, etc. You must provide an endpoint url." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "llm = CerebriumAI(endpoint_url=\"YOUR ENDPOINT URL HERE\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Create a Prompt Template\n", "We will create a prompt template for Question and Answer." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "template = \"\"\"Question: {question}\n", "\n", "Answer: Let's think step by step.\"\"\"\n", "\n", "prompt = PromptTemplate(template=template, input_variables=[\"question\"])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Initiate the LLMChain" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "llm_chain = LLMChain(prompt=prompt, llm=llm)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Run the LLMChain\n", "Provide a question and run the LLMChain." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "question = \"What NFL team won the Super Bowl in the year Justin Beiber was born?\"\n", "\n", "llm_chain.run(question)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3.9.12 ('palm')", "language": "python", "name": "python3" }, "language_info": { "name": "python", "version": "3.9.12" }, "orig_nbformat": 4, "vscode": { "interpreter": { "hash": "a0a0263b650d907a3bfe41c0f8d6a63a071b884df3cfdc1579f00cdc1aed6b03" } } }, "nbformat": 4, "nbformat_minor": 2 }