Run large language models at home, BitTorrent-style.
Fine-tuning and inference up to 10x faster than offloading
Try now in Colab
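For a quick taste of the client side, here is a minimal sketch of generating text over the public swarm. It follows the pattern used in the Petals tutorials linked below; the model name, prompt, and generation settings are only examples, and exact class names may differ between Petals versions, so treat this as a sketch rather than a reference.

```python
# Minimal sketch: generate text with a model served by the public Petals swarm.
# Assumes `pip install git+https://github.com/bigscience-workshop/petals`;
# class names and arguments may vary between Petals versions.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"  # any model listed at https://health.petals.dev

tokenizer = AutoTokenizer.from_pretrained(model_name)
# Loads only a small part of the model locally; the remaining blocks are served by the swarm.
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
```

For gated models such as Llama 2, run `huggingface-cli login` first, as described below.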
🦙 **Want to run Llama 2?** Request access to its weights at the ♾️ [Meta AI website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and 🤗 [Model Hub](https://huggingface.co/meta-llama/Llama-2-70b-hf), then run `huggingface-cli login` in the terminal before loading the model. Or just try it in our [chatbot app](https://chat.petals.dev).

🔏 **Privacy.** Your data will be processed by other people in the public swarm. Learn more about privacy [here](https://github.com/bigscience-workshop/petals/wiki/Security,-privacy,-and-AI-safety). For sensitive data, you can set up a [private swarm](https://github.com/bigscience-workshop/petals/wiki/Launch-your-own-swarm) among people you trust.

💬 **Any questions?** Ping us in [our Discord](https://discord.gg/KdThf2bWVU)!

## Connect your GPU and increase Petals capacity

Petals is a community-run system — we rely on people sharing their GPUs. You can check out [available models](https://health.petals.dev) and help serve one of them! As an example, here is how to host a part of [Stable Beluga 2](https://huggingface.co/stabilityai/StableBeluga2) on your GPU:

🐧 **Linux + Anaconda.** Run these commands for NVIDIA GPUs (or follow [this](https://github.com/bigscience-workshop/petals/wiki/Running-on-AMD-GPU) for AMD):

```bash
conda install pytorch pytorch-cuda=11.7 -c pytorch -c nvidia
pip install git+https://github.com/bigscience-workshop/petals
python -m petals.cli.run_server petals-team/StableBeluga2
```

🪟 **Windows + WSL.** Follow [this guide](https://github.com/bigscience-workshop/petals/wiki/Run-Petals-server-on-Windows) on our Wiki.

🐋 **Docker.** Run our [Docker](https://www.docker.com) image for NVIDIA GPUs (or follow [this](https://github.com/bigscience-workshop/petals/wiki/Running-on-AMD-GPU) for AMD):

```bash
sudo docker run -p 31330:31330 --ipc host --gpus all --volume petals-cache:/cache --rm \
    learningathome/petals:main \
    python -m petals.cli.run_server --port 31330 petals-team/StableBeluga2
```

🍏 **macOS + Apple M1/M2 GPU.** Install [Homebrew](https://brew.sh/), then run these commands:

```bash
brew install python
python3 -m pip install git+https://github.com/bigscience-workshop/petals
python3 -m petals.cli.run_server petals-team/StableBeluga2
```

📚 **Learn more** (how to use multiple GPUs, start the server on boot, etc.)
💬 **Any questions?** Ping us in [our Discord](https://discord.gg/X7DgtxgMhc)!

🦙 **Want to host Llama 2?** Request access to its weights at the ♾️ [Meta AI website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and 🤗 [Model Hub](https://huggingface.co/meta-llama/Llama-2-70b-hf), generate an 🔑 [access token](https://huggingface.co/settings/tokens), then add `--token YOUR_TOKEN_HERE` to the `python -m petals.cli.run_server` command.

🔒 **Security.** Hosting a server does not allow others to run custom code on your computer. Learn more [here](https://github.com/bigscience-workshop/petals/wiki/Security,-privacy,-and-AI-safety).

🏆 **Thank you!** Once you load and host 10+ blocks, we can show your name or link on the [swarm monitor](https://health.petals.dev) as a way to say thanks. You can specify it with `--public_name YOUR_NAME`.

## How does it work?

- Petals runs large language models like [Llama](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) and [BLOOM](https://huggingface.co/bigscience/bloom) **collaboratively** — you load a small part of the model, then join people serving the other parts to run inference or fine-tuning.
- Single-batch inference runs at **up to 6 steps/sec** for **Llama 2** (70B) and ≈ 1 step/sec for BLOOM-176B. This is [up to 10x faster](https://github.com/bigscience-workshop/petals#benchmarks) than offloading, enough to build [chatbots](https://chat.petals.dev) and other interactive apps. Parallel inference reaches hundreds of tokens/sec.
- Beyond classic language model APIs — you can employ any fine-tuning and sampling methods, execute custom paths through the model, or see its hidden states (see the sketch after this list). You get the comforts of an API with the flexibility of PyTorch.
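To illustrate the last point, here is a hedged sketch of decoding greedily by hand instead of calling `generate()`, inspecting the logits after every forward pass. It assumes the distributed model follows the usual `transformers` forward convention (taking `input_ids` and returning an output object with `logits`); treat the exact names and arguments as assumptions, not a documented API.

```python
# Sketch: step-by-step greedy decoding with a distributed Petals model.
# Assumes the model follows the standard `transformers` forward convention
# (input_ids in, an output object with .logits out); verify against your Petals version.
import torch
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"  # illustrative choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

input_ids = tokenizer("The capital of France is", return_tensors="pt")["input_ids"]

with torch.inference_mode():
    for _ in range(5):  # generate 5 tokens, one forward pass at a time
        logits = model(input_ids).logits        # forward pass runs through the swarm
        next_token = logits[0, -1].argmax()     # greedy choice; any sampling rule works here
        input_ids = torch.cat([input_ids, next_token.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Note that this naive loop re-sends the whole prefix on every step; in practice, `model.generate()` reuses cached attention on the servers, which is where the interactive speeds quoted above come from.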
📜 Read paper · 🔍 See FAQ
## 📚 Tutorials, examples, and more

Basic tutorials:

- Getting started: [tutorial](https://colab.research.google.com/drive/1uCphNY7gfAUkdDrTx21dZZwCOUDCMPw8?usp=sharing)
- Prompt-tune Llama-65B for text semantic classification: [tutorial](https://colab.research.google.com/github/bigscience-workshop/petals/blob/main/examples/prompt-tuning-sst2.ipynb)
- Prompt-tune BLOOM to create a personified chatbot: [tutorial](https://colab.research.google.com/github/bigscience-workshop/petals/blob/main/examples/prompt-tuning-personachat.ipynb)

Useful tools:

- [Chatbot web app](https://chat.petals.dev) (connects to Petals via an HTTP/WebSocket endpoint): [source code](https://github.com/petals-infra/chat.petals.dev)
- [Monitor](https://health.petals.dev) for the public swarm: [source code](https://github.com/petals-infra/health.petals.dev)

Advanced guides:

- Launch a private swarm: [guide](https://github.com/bigscience-workshop/petals/wiki/Launch-your-own-swarm)
- Run a custom model: [guide](https://github.com/bigscience-workshop/petals/wiki/Run-a-custom-model-with-Petals)

## Benchmarks

The benchmarks below are for BLOOM-176B. Single-batch inference is measured in steps/s at sequence lengths 128 and 2048; parallel forward passes are measured in tokens/s at batch sizes 1 and 64.
| Setup | Bandwidth | Round-trip latency | Inference, seq. 128 | Inference, seq. 2048 | Forward, batch 1 | Forward, batch 64 |
|---|---|---|---|---|---|---|
| Offloading, max. possible speed on 1x A100 ¹ | 256 Gbit/s | | 0.18 | 0.18 | 2.7 | 170.3 |
| Offloading, max. possible speed on 1x A100 ¹ | 128 Gbit/s | | 0.09 | 0.09 | 2.4 | 152.8 |
| Petals on 14 heterogeneous servers across Europe and North America ² | Real world | | 0.83 | 0.79 | 32.6 | 179.4 |
| Petals on 3 servers, with one A100 each ³ | 1 Gbit/s | < 5 ms | 1.71 | 1.54 | 70.0 | 253.6 |
| Petals on 3 servers, with one A100 each ³ | 100 Mbit/s | < 5 ms | 1.66 | 1.49 | 56.4 | 182.0 |
| Petals on 3 servers, with one A100 each ³ | 100 Mbit/s | 100 ms | 1.23 | 1.11 | 19.7 | 112.2 |
This project is a part of the BigScience research workshop.