Prompt-tuning notebooks: suggest using a smaller model for faster prototyping (#234)

pull/244/head
Alexander Borzunov authored 1 year ago, committed by GitHub
parent d4c687daca
commit 5d7395e1b5

@@ -61,8 +61,8 @@ You can also host [BLOOMZ](https://huggingface.co/bigscience/bloomz), a version
 Basic tutorials:
 - Getting started: [tutorial](https://colab.research.google.com/drive/1Ervk6HPNS6AYVr3xVdQnY5a-TjjmLCdQ?usp=sharing)
-- Fine-tune BLOOM to be a personified chatbot: [tutorial](https://colab.research.google.com/github/bigscience-workshop/petals/blob/main/examples/prompt-tuning-personachat.ipynb)
-- Fine-tune BLOOM for text semantic classification: [tutorial](https://colab.research.google.com/github/bigscience-workshop/petals/blob/main/examples/prompt-tuning-sst2.ipynb)
+- Prompt-tune BLOOM to create a personified chatbot: [tutorial](https://colab.research.google.com/github/bigscience-workshop/petals/blob/main/examples/prompt-tuning-personachat.ipynb)
+- Prompt-tune BLOOM for text semantic classification: [tutorial](https://colab.research.google.com/github/bigscience-workshop/petals/blob/main/examples/prompt-tuning-sst2.ipynb)

 Example apps built with Petals:

@@ -75,7 +75,18 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"MODEL_NAME = \"bigscience/bloom-petals\" # select model you like\n",
+"# Choose a model you'd like to prompt-tune. We recommend starting with\n",
+"# the smaller 7.1B version of BLOOM (bigscience/bloom-7b1-petals) for faster prototyping.\n",
+"# Once your code is ready, you can switch to full-scale\n",
+"# 176B-parameter BLOOM (bigscience/bloom-petals) or BLOOMZ (bigscience/bloomz-petals).\n",
+"MODEL_NAME = \"bigscience/bloom-7b1-petals\"\n",
+"\n",
+"# Choose a prompt-tuning mode ('ptune' or 'deep_ptune').\n",
+"# The latter fine-tunes separate prefixes for each transformer block,\n",
+"# so prompt-tuning will take more time but yield better results.\n",
+"# See this paper for details of how it works: https://arxiv.org/pdf/2110.07602.pdf\n",
+"TUNING_MODE = 'ptune'\n",
+"\n",
 "NUM_PREFIX_TOKENS = 16\n",
 "DEVICE = 'cuda'\n",
 "BATCH_SIZE = 8\n",
@@ -83,8 +94,7 @@
 "WEIGHT_DECAY = 0.0\n",
 "NUM_SAMPLES = 1000\n",
 "SEED = 42\n",
-"MODEL_MAX_LENGTH = 256\n",
-"TUNING_MODE = 'ptune' # choose between ['ptune', 'deep_ptune'] "
+"MODEL_MAX_LENGTH = 256"
 ]
 },
 {

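For context, the constants in the hunk above (this appears to be the PersonaChat notebook, judging by `NUM_SAMPLES` and `MODEL_MAX_LENGTH = 256`) are consumed a few cells later when the distributed model is created. Below is a minimal sketch of that step, assuming the `petals` client API from around the time of this commit; treat the exact call as illustrative rather than authoritative:

```python
# Illustrative sketch, assuming the petals client API circa this commit.
# Only the learned prefix embeddings are trained; the remote BLOOM blocks
# served by the public swarm stay frozen.
import torch
from transformers import BloomTokenizerFast
from petals import DistributedBloomForCausalLM

MODEL_NAME = "bigscience/bloom-7b1-petals"  # smaller model for prototyping
TUNING_MODE = "ptune"                       # or "deep_ptune" for per-block prefixes
NUM_PREFIX_TOKENS = 16
MODEL_MAX_LENGTH = 256
DEVICE = "cuda"

tokenizer = BloomTokenizerFast.from_pretrained(MODEL_NAME)
tokenizer.padding_side = "right"
tokenizer.model_max_length = MODEL_MAX_LENGTH

model = DistributedBloomForCausalLM.from_pretrained(
    MODEL_NAME,
    pre_seq_len=NUM_PREFIX_TOKENS,  # number of trainable prefix tokens
    tuning_mode=TUNING_MODE,
).to(DEVICE)
```

Switching back to the full 176B model after prototyping is then a one-line change to `MODEL_NAME`, which is the point of this commit.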
@@ -77,7 +77,18 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"MODEL_NAME = \"bigscience/bloom-petals\" # select model you like\n",
+"# Choose a model you'd like to prompt-tune. We recommend starting with\n",
+"# the smaller 7.1B version of BLOOM (bigscience/bloom-7b1-petals) for faster prototyping.\n",
+"# Once your code is ready, you can switch to full-scale\n",
+"# 176B-parameter BLOOM (bigscience/bloom-petals) or BLOOMZ (bigscience/bloomz-petals).\n",
+"MODEL_NAME = \"bigscience/bloom-7b1-petals\"\n",
+"\n",
+"# Choose a prompt-tuning mode ('ptune' or 'deep_ptune').\n",
+"# The latter fine-tunes separate prefixes for each transformer block,\n",
+"# so prompt-tuning will take more time but yield better results.\n",
+"# See this paper for details of how it works: https://arxiv.org/pdf/2110.07602.pdf\n",
+"TUNING_MODE = 'ptune'\n",
+"\n",
 "NUM_PREFIX_TOKENS = 16\n",
 "DEVICE = 'cuda'\n",
 "BATCH_SIZE = 16\n",
@@ -85,8 +96,7 @@
 "WEIGHT_DECAY = 0.0\n",
 "NUM_EPOCHS = 3\n",
 "SEED = 42\n",
-"MODEL_MAX_LENGTH = 64\n",
-"TUNING_MODE = 'ptune' # choose between ['ptune', 'deep_ptune'] "
+"MODEL_MAX_LENGTH = 64"
 ]
 },
 {

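The second hunk is the SST-2 classification notebook (note `NUM_EPOCHS` and `MODEL_MAX_LENGTH = 64`); there the same constants drive a sequence-classification head on top of the frozen backbone. A hedged sketch, again assuming the petals API of that era (`DistributedBloomForSequenceClassification` with transformers-style kwargs); `LR` below is a hypothetical placeholder for the learning rate set in an elided line of the cell:

```python
# Illustrative sketch for the classification variant (same assumptions as above).
import torch
from petals import DistributedBloomForSequenceClassification

MODEL_NAME = "bigscience/bloom-7b1-petals"
TUNING_MODE = "ptune"
NUM_PREFIX_TOKENS = 16
DEVICE = "cuda"
LR = 1e-2            # hypothetical; the real value is set in an elided line
WEIGHT_DECAY = 0.0

model = DistributedBloomForSequenceClassification.from_pretrained(
    MODEL_NAME,
    pre_seq_len=NUM_PREFIX_TOKENS,
    tuning_mode=TUNING_MODE,
    num_labels=2,  # SST-2: binary sentiment
).to(DEVICE)

# Only parameters with requires_grad=True (the learned prefixes and the
# classifier head) are trainable, so the optimizer sees a tiny fraction
# of the full model.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad),
    lr=LR,
    weight_decay=WEIGHT_DECAY,
)
```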