Petals
An easy way to run 100B+ language models without high-end GPUs by joining compute resources with people across the Internet. Up to 10x faster than offloading.

Generate text using distributed BLOOM and fine-tune it for your own tasks:

import torch
from torch.nn.functional import cross_entropy
from transformers import AutoTokenizer

from petals.client import DistributedBloomForCausalLM

# Embeddings & prompts are on your device, BLOOM blocks are distributed across the Internet
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-petals")
model = DistributedBloomForCausalLM.from_pretrained("bigscience/bloom-petals", tuning_mode="ptune")

inputs = tokenizer("A cat sat", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))  # A cat sat on a mat...

# Training (updates only prompts or adapters hosted locally);
# data_loader is assumed to be your own DataLoader yielding (input_ids, labels) batches
optimizer = torch.optim.AdamW(model.parameters())
for input_ids, labels in data_loader:
    outputs = model.forward(input_ids)
    loss = cross_entropy(outputs.logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

🚀  Try now in Colab

Connect your own GPU and increase Petals' capacity:

# In an Anaconda env
(conda) $ conda install pytorch cudatoolkit=11.3 -c pytorch
(conda) $ pip install git+https://github.com/bigscience-workshop/petals
(conda) $ python -m petals.cli.run_server bigscience/bloom-petals

# Or using a GPU-enabled Docker image
sudo docker run --net host --ipc host --gpus all --volume petals-cache:/cache --rm learningathome/petals:main \
    python -m petals.cli.run_server bigscience/bloom-petals

💬 If you have any issues or feedback, please join our Discord server!

Check out more tutorials:

  • Training a personified chatbot: notebook
  • Fine-tuning BLOOM for text semantic classification: notebook
  • Launching your own swarm: tutorial
  • Running a custom foundation model: tutorial

How does it work?

  • Petals runs inference or fine-tunes large language models like BLOOM-176B by joining compute resources with people all over the Internet.
  • A single participant with a weak GPU can load a small part of the model, then team up with people serving the other parts to run inference or fine-tuning.
  • Inference runs at ≈ 1 sec per step (token) — 10x faster than possible with offloading, enough for chatbots and other interactive apps. Parallel inference reaches hundreds of tokens/sec.
  • Beyond classic language model APIs, you can employ any fine-tuning and sampling methods by executing custom paths through the model or accessing its hidden states (see the sketch below). This combines the comforts of an API with the flexibility of PyTorch.
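
For illustration, here is a minimal, hypothetical sketch of working with hidden states instead of logits. It assumes the Petals client mirrors the Hugging Face BLOOM interface (a model.transformer submodule whose output exposes .last_hidden_state) and reuses the model and tokenizer from the snippet above; check the Petals docs and tutorials for the exact API.

inputs = tokenizer("A cat sat", return_tensors="pt")["input_ids"]
# Run only the transformer stack (distributed across the swarm) without the LM head
hidden_states = model.transformer(inputs).last_hidden_state  # (batch, seq_len, hidden_size)
# ...feed hidden_states into your own classifier head, custom sampler, etc.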

📜  Read paper

📋 Model's terms of use

Before building your own application that runs a language model with Petals, please make sure that you are familiar with the model's terms of use, risks, and limitations. In the case of BLOOM, they are described in its model card and license.

🔒 Privacy and security

If you work with sensitive data, do not use the public swarm. This is important because it's technically possible for peers serving model layers to recover input data and model outputs, or to modify the outputs in a malicious way. Instead, you can set up a private Petals swarm hosted by people and organizations you trust, who are authorized to process this data (see the sketch below). We discuss privacy and security in more detail here.
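
As an illustration, a private swarm can be bootstrapped with the same flags used in the development setup below. The peer address here is a placeholder that you would replace with the multiaddress printed in your first server's logs.

# On the first trusted machine: start a new swarm instead of joining the public one
python -m petals.cli.run_server bigscience/bloom-petals --new_swarm

# On every other trusted machine: join using the first server's address from its logs
python -m petals.cli.run_server bigscience/bloom-petals \
    --initial_peers /ip4/10.0.0.1/tcp/31337/p2p/REPLACE_WITH_PEER_ID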

FAQ

  1. What's the motivation for people to host model layers in the public swarm?

    People who run inference and fine-tuning themselves get a certain speedup if they host a part of the model locally. Some may also be motivated to "give back" to the community that helps them run the model (similarly to how BitTorrent users help others by sharing data they have already downloaded).

    Since this may not be enough for everyone, we are also working on introducing explicit incentives ("bloom points") for people donating their GPU time to the public swarm. Once this system is ready, people who have earned these points will be able to spend them on inference/fine-tuning with higher priority or increased security guarantees, or (maybe) exchange them for other rewards.

  2. Why is the platform named "Petals"?

    "Petals" is a metaphor for people serving different parts of the model. Together, they host the entire language model — BLOOM.

    While our platform focuses on BLOOM now, we aim to support more foundation models in the future.

Installation

Here's how to install Petals with conda:

conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
pip install git+https://github.com/bigscience-workshop/petals

This script uses Anaconda to install CUDA-enabled PyTorch. If you don't have Anaconda, you can get it from here. If you don't want Anaconda, you can install PyTorch any other way. If you want to run models with 8-bit weights, please install PyTorch with CUDA 11 or newer for compatibility with bitsandbytes.
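
To confirm that your PyTorch build satisfies this requirement, you can print the CUDA version it was compiled against. This is just a quick sanity check, not part of the official instructions:

python -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"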

System requirements: Petals only supports Linux for now. If you don't have a Linux machine, consider running Petals in Docker (see our image) or, in the case of Windows, in WSL2 (read more). A CPU is enough to run a client, but you will probably need a GPU to run a server efficiently.

🛠️ Development

Petals uses pytest with a few plugins. To install them, run:

git clone https://github.com/bigscience-workshop/petals.git && cd petals
pip install -e .[dev]

To run minimalistic tests, you need to set up a local swarm with a small model and a few servers. You can find more information about how local swarms work and how to run them in this tutorial.

export MODEL_NAME=bloom-testing/test-bloomd-560m-main

python -m petals.cli.run_server $MODEL_NAME --block_indices 0:12 \
  --identity tests/test.id --host_maddrs /ip4/127.0.0.1/tcp/31337 --new_swarm  &> server1.log &
sleep 5  # wait for the first server to initialize DHT

python -m petals.cli.run_server $MODEL_NAME --block_indices 12:24 \
  --initial_peers SEE_THE_OUTPUT_OF_THE_1ST_PEER &> server2.log &

tail -f server1.log server2.log  # view logs for both servers

Then launch pytest:

export MODEL_NAME=bloom-testing/test-bloomd-560m-main REF_NAME=bigscience/bloom-560m
export INITIAL_PEERS=/ip4/127.0.0.1/tcp/31337/p2p/QmS9KwZptnVdB9FFV7uGgaTq4sEKBwcYeKZDfSpyKDUd1g
PYTHONPATH=. pytest tests --durations=0 --durations-min=1.0 -v

After you're done, you can terminate the servers and ensure that no zombie processes are left with pkill -f petals.cli.run_server && pkill -f p2p.

The automated tests use a more complex server configuration that can be found here.

Code style

We use black and isort for all pull requests. Before committing your code, simply run black . && isort . and you will be fine.


This project is a part of the BigScience research workshop.