{
"cells": [
{
"cell_type": "markdown",
"id": "a07e0f5e",
"metadata": {
"id": "a07e0f5e"
},
"source": [
"<div>\n",
"<img src=\"https://camo.githubusercontent.com/473dd9f992924d27457650251786464f72e54121ac6e9210add0f483ca849277/68747470733a2f2f692e696d6775722e636f6d2f3765523750616e2e706e67\" width=\"40%\"> \n",
"</div>\n",
"\n",
"# Distributed LLaMA for Text Classification using Prompt Tuning\n",
"\n",
"In this example, we show how to use [prompt tuning](https://aclanthology.org/2021.emnlp-main.243.pdf) to adapt the [LLaMA](https://github.com/facebookresearch/llama) model to a specific downstream task. We will run this model in a decentralized fashion using [Petals](https://github.com/bigscience-workshop/petals). Petals servers host the LLaMA blocks, which are kept frozen during adaptation, while gradient descent learns a few prefix tokens stored on the Petals client.\n",
"\n",
"We will adapt LLaMA to a classification task using the [SST-2 dataset](https://nlp.stanford.edu/sentiment/). SST-2 is a binary classification task where the goal is to predict whether a sentence expresses positive or negative sentiment. The dataset is a subset of the Stanford Sentiment Treebank and is available in the [Hugging Face Datasets](https://huggingface.co/datasets) library.\n",
"\n",
"To use this notebook in Colab:\n",
"\n",
"1. Follow this link: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/bigscience-workshop/petals/blob/main/examples/prompt-tuning-sst2.ipynb)\n",
"2. Go to **Runtime** -> **Change runtime type** and select the GPU accelerator."
]
},
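{
"cell_type": "markdown",
"id": "b2f1a9c3",
"metadata": {
"id": "b2f1a9c3"
},
"source": [
"Conceptually, prompt tuning keeps all transformer weights frozen and learns only a small matrix of \"virtual token\" embeddings that is prepended to the input. The toy sketch below (tiny dimensions, plain PyTorch, not Petals' actual implementation) shows the core idea:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c7d4e2f8",
"metadata": {
"id": "c7d4e2f8"
},
"outputs": [],
"source": [
"# Toy illustration of prompt tuning (not Petals' actual implementation):\n",
"# a few trainable prefix embeddings are prepended to the frozen model's\n",
"# input embeddings, and only these prefixes receive gradients.\n",
"import torch\n",
"import torch.nn as nn\n",
"\n",
"hidden_size, num_prefix_tokens = 16, 4  # tiny sizes, just for illustration\n",
"prefix = nn.Parameter(torch.randn(num_prefix_tokens, hidden_size) * 0.02)\n",
"\n",
"def prepend_prefix(input_embeds):  # input_embeds: [batch, seq_len, hidden]\n",
"    batch_size = input_embeds.shape[0]\n",
"    expanded = prefix.unsqueeze(0).expand(batch_size, -1, -1)\n",
"    return torch.cat([expanded, input_embeds], dim=1)  # [batch, k + seq, hidden]\n",
"\n",
"embeds = torch.randn(2, 10, hidden_size)  # stands in for token embeddings\n",
"print(prepend_prefix(embeds).shape)  # torch.Size([2, 14, 16])"
]
},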
{
"cell_type": "markdown",
"id": "a3f8526f",
"metadata": {
"id": "a3f8526f"
},
"source": [
"First, we have to prepare all dependencies."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "73bbc648",
"metadata": {
"id": "73bbc648"
},
"outputs": [],
"source": [
"%pip install -q datasets wandb scikit-learn\n",
"%pip install -q git+https://github.com/bigscience-workshop/petals@main"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b4ab6ca7",
"metadata": {
"id": "b4ab6ca7"
},
"outputs": [],
"source": [
"import os\n",
"\n",
"import torch\n",
"import torch.nn as nn\n",
"import torch.nn.functional as F\n",
"import transformers\n",
"import wandb\n",
"from datasets import load_dataset, load_metric\n",
"from tqdm import tqdm\n",
"from torch.optim import AdamW\n",
"from torch.utils.data import DataLoader\n",
"from transformers import LlamaTokenizer, get_scheduler, set_seed\n",
"\n",
"from petals import DistributedLlamaForSequenceClassification\n",
"\n",
"set_seed(0)"
]
},
{
"cell_type": "markdown",
"id": "1bf07b5d",
"metadata": {
"id": "1bf07b5d"
},
"source": [
"Let's set some hyperparameters for training:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f04ba4d2",
"metadata": {
"id": "f04ba4d2"
},
"outputs": [],
"source": [
"MODEL_NAME = \"enoch/llama-65b-hf\"\n",
"\n",
"# Choose a prompt-tuning mode ('ptune' or 'deep_ptune').\n",
"# The latter fine-tunes separate prefixes for each transformer block,\n",
"# so prompt-tuning will take more time but yield better results.\n",
"# See this paper for details of how it works: https://arxiv.org/pdf/2110.07602.pdf\n",
"TUNING_MODE = 'ptune'\n",
"\n",
"NUM_PREFIX_TOKENS = 8\n",
"DEVICE = 'cuda'\n",
"BATCH_SIZE = 32\n",
"LR = 1e-2\n",
"WEIGHT_DECAY = 0.0\n",
"NUM_EPOCHS = 3\n",
"SEED = 42\n",
"MODEL_MAX_LENGTH = 64"
]
},
{
"cell_type": "markdown",
"id": "d38316bd",
"metadata": {
"id": "d38316bd"
},
"source": [
"Here, we prepare the tokenizer and the distributed model and connect to the public swarm."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "03c6e53e",
"metadata": {
"id": "03c6e53e"
},
"outputs": [],
"source": [
"tokenizer = LlamaTokenizer.from_pretrained(MODEL_NAME)\n",
"tokenizer.padding_side = 'right'\n",
"tokenizer.model_max_length = MODEL_MAX_LENGTH\n",
"tokenizer.pad_token = tokenizer.unk_token\n",
"model = DistributedLlamaForSequenceClassification.from_pretrained(\n",
"    MODEL_NAME,\n",
"    pre_seq_len=NUM_PREFIX_TOKENS,\n",
"    tuning_mode=TUNING_MODE\n",
").float().to(DEVICE)\n",
"model.config.pad_token_id = tokenizer.pad_token_id"
]
},
{
"cell_type": "markdown",
"id": "042e3786",
"metadata": {
"id": "042e3786"
},
"source": [
"Let's prepare the SST-2 dataset. We need just one preprocessing function to tokenize the dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9c44d516",
"metadata": {
"id": "9c44d516"
},
"outputs": [],
"source": [
"task = 'sst2'\n",
"\n",
"dataset = load_dataset(\"glue\", task)\n",
"\n",
"def preprocess_function(examples):\n",
2023-07-12 12:50:54 +00:00
" return tokenizer(examples[\"sentence\"], padding='max_length', truncation=True, return_token_type_ids=False)\n",
2022-11-07 09:55:00 +00:00
"\n",
"tokenized_datasets = dataset.map(preprocess_function, batched=True)\n",
"tokenized_datasets = tokenized_datasets.remove_columns([\"sentence\", \"idx\", \"attention_mask\"])\n",
"tokenized_datasets = tokenized_datasets.rename_column(\"label\", \"labels\")\n",
"tokenized_datasets.set_format(\"torch\")\n",
"\n",
"train_dataset = tokenized_datasets[\"train\"].shuffle(seed=SEED)\n",
"valid_dataset = tokenized_datasets[\"validation\"].shuffle(seed=SEED)\n",
"\n",
"train_dataloader = DataLoader(train_dataset, shuffle=True, batch_size=BATCH_SIZE, drop_last=True)\n",
"valid_dataloader = DataLoader(valid_dataset, batch_size=BATCH_SIZE)"
]
},
{
"cell_type": "markdown",
"id": "2a3f3590",
"metadata": {
"id": "2a3f3590"
},
"source": [
"To monitor training, we need a metric function. For SST-2, the target metric is accuracy, which we will load from the `datasets` library."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1e1812be",
"metadata": {
"id": "1e1812be"
},
"outputs": [],
"source": [
"metric = load_metric('glue', task)\n",
"\n",
"def eval_metrics(model, dataloader, device='cpu'):\n",
" model.eval()\n",
" for batch in dataloader:\n",
" batch = {k: v.to(device) for k, v in batch.items()}\n",
2023-07-12 12:50:54 +00:00
"\n",
2022-11-07 09:55:00 +00:00
" with torch.no_grad():\n",
" outputs = model(**batch)\n",
"\n",
" logits = outputs.logits\n",
" predictions = torch.argmax(logits, dim=-1)\n",
" metric.add_batch(predictions=predictions, references=batch[\"labels\"])\n",
" model.train()\n",
" return metric.compute()"
]
},
{
"cell_type": "markdown",
"id": "ef4323fd",
"metadata": {
"id": "ef4323fd"
},
"source": [
"Before setting up the optimizer, let's check which model parameters will actually be trained."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9cc0ba34",
"metadata": {
"id": "9cc0ba34"
},
"outputs": [],
"source": [
"for n, p in model.named_parameters():\n",
" if p.requires_grad:\n",
" print(n, p.requires_grad, p.device)"
]
},
{
"cell_type": "markdown",
"id": "59cffce7",
"metadata": {
"id": "59cffce7"
},
"source": [
"The optimizer will update only the **prompts and the classifier head**, since these are the only trainable parameters. Let's initialize the optimizer and the learning rate scheduler."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ef9bf344",
"metadata": {
"id": "ef9bf344"
},
"outputs": [],
"source": [
"optimizer = AdamW(model.parameters(), lr=LR, weight_decay=WEIGHT_DECAY)\n",
"\n",
"lr_scheduler = get_scheduler(\n",
"    name=\"linear\", optimizer=optimizer, num_warmup_steps=0, num_training_steps=len(train_dataloader) * NUM_EPOCHS\n",
")"
]
},
{
"cell_type": "markdown",
"id": "423c56d5",
"metadata": {
"id": "423c56d5"
},
"source": [
"Let's initialize wandb for logging and start the training loop!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d9e46807",
"metadata": {
"id": "d9e46807"
},
"outputs": [],
"source": [
"wandb.init(\n",
" project=\"bloom-sst-2\",\n",
" config={\n",
" \"num_epochs\": NUM_EPOCHS,\n",
" \"batch_size\": BATCH_SIZE,\n",
" \"learning_rate\": LR,\n",
" \"weight_decay\": WEIGHT_DECAY,\n",
" \"num_prefix_tokens\": NUM_PREFIX_TOKENS,\n",
" \"model_name\": MODEL_NAME,\n",
" \"seed\": SEED,\n",
" }\n",
")\n",
"\n",
2023-07-12 12:50:54 +00:00
"scaler = torch.cuda.amp.GradScaler()\n",
"\n",
2022-11-07 09:55:00 +00:00
"for epoch in range(NUM_EPOCHS):\n",
2023-07-12 12:50:54 +00:00
" model.train()\n",
2022-11-07 09:55:00 +00:00
" for batch in tqdm(train_dataloader):\n",
" batch = {k: v.to(DEVICE) for k, v in batch.items()}\n",
"\n",
2023-07-12 12:50:54 +00:00
" with torch.autocast(device_type=DEVICE, dtype=torch.float16):\n",
" outputs = model(**batch)\n",
2022-11-07 09:55:00 +00:00
" loss = outputs.loss\n",
2023-07-12 12:50:54 +00:00
" scaler.scale(loss).backward()\n",
2022-11-07 09:55:00 +00:00
"\n",
2023-07-12 12:50:54 +00:00
" scaler.step(optimizer)\n",
" scaler.update()\n",
2022-11-07 09:55:00 +00:00
" lr_scheduler.step()\n",
" optimizer.zero_grad()\n",
"\n",
2023-07-12 12:50:54 +00:00
" wandb.log({\"Train Loss\": loss.detach()})\n",
2022-11-07 09:55:00 +00:00
"\n",
" accuracy = eval_metrics(model, valid_dataloader, device=DEVICE)\n",
" wandb.log({\"Valid Accuracy\": accuracy}, commit=False)"
]
},
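{
"cell_type": "markdown",
"id": "e5a7b3c1",
"metadata": {
"id": "e5a7b3c1"
},
"source": [
"After training, we can try the tuned classifier on our own text. Below is a minimal inference sketch (the example `sentence` is just an illustration): we tokenize the input exactly as during training and take the argmax over the two logits."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f8c2d4a6",
"metadata": {
"id": "f8c2d4a6"
},
"outputs": [],
"source": [
"# Minimal inference sketch (assumes the model above has been trained).\n",
"sentence = \"This movie was absolutely wonderful!\"  # hypothetical example input\n",
"inputs = tokenizer(sentence, padding='max_length', truncation=True,\n",
"                   return_token_type_ids=False, return_tensors=\"pt\")\n",
"inputs = {k: v.to(DEVICE) for k, v in inputs.items()}\n",
"\n",
"model.eval()\n",
"with torch.no_grad():\n",
"    logits = model(**inputs).logits\n",
"label = logits.argmax(dim=-1).item()  # 0 = negative, 1 = positive in SST-2\n",
"print(sentence, '->', 'positive' if label == 1 else 'negative')"
]
},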
{
"cell_type": "markdown",
"id": "51770911",
"metadata": {
"id": "51770911"
},
"source": [
"Our model has been trained! You can now upload it to the Hub for later use, try out different models [served in the public swarm](https://health.petals.dev/), or [join Petals with your own GPU](https://github.com/bigscience-workshop/petals#connect-your-gpu-and-increase-petals-capacity)!"
]
},
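{
"cell_type": "markdown",
"id": "a1b2c3d4",
"metadata": {
"id": "a1b2c3d4"
},
"source": [
"For instance, one simple way to keep the result around is to save just the trainable parameters (the learned prompts and the classifier head). This is only a sketch; the filename is arbitrary:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b5c6d7e8",
"metadata": {
"id": "b5c6d7e8"
},
"outputs": [],
"source": [
"# Sketch: save only the trainable parameters (prompts + classifier head).\n",
"# The filename is arbitrary.\n",
"trainable_state = {n: p.detach().cpu() for n, p in model.named_parameters() if p.requires_grad}\n",
"torch.save(trainable_state, 'llama-sst2-ptune.pt')\n",
"\n",
"# Later, restore them into a freshly created model with:\n",
"# model.load_state_dict(torch.load('llama-sst2-ptune.pt'), strict=False)"
]
}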
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8"
},
"vscode": {
"interpreter": {
"hash": "31f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6"
}
},
"colab": {
"provenance": [],
"gpuType": "T4"
},
"accelerator": "GPU"
},
"nbformat": 4,
"nbformat_minor": 5
}