Run 100B+ language models at home, BitTorrent-style.
Fine-tuning and inference up to 10x faster than offloading
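To give a sense of the client API before the details below, here is a minimal inference sketch. The `DistributedBloomForCausalLM` class and the `bigscience/bloom-petals` checkpoint follow the Petals documentation for this release, but treat them as assumptions and check the docs for your installed version:

```python
# Minimal sketch: generate text with BLOOM over the public swarm.
# Embeddings and prompts run on your device; transformer blocks are
# served by remote peers. Class and checkpoint names are assumptions
# based on the Petals docs for this release.
from transformers import BloomTokenizerFast
from petals import DistributedBloomForCausalLM

MODEL_NAME = "bigscience/bloom-petals"  # assumed public-swarm checkpoint

tokenizer = BloomTokenizerFast.from_pretrained(MODEL_NAME)
model = DistributedBloomForCausalLM.from_pretrained(MODEL_NAME)

inputs = tokenizer("A cat sat", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
```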
## FAQ

1. **What's the motivation for people to host model layers in the public swarm?**

    People who run inference and fine-tuning themselves get a certain speedup if they host a part of the model locally. Some may also be motivated to "give back" to the community that helps them run the model (similarly to how [BitTorrent](https://en.wikipedia.org/wiki/BitTorrent) users help others by sharing data they have already downloaded).

    Since this may not be enough for everyone, we are also working on introducing explicit __incentives__ ("bloom points") for people donating their GPU time to the public swarm. Once this system is ready, people who earned these points will be able to spend them on inference/fine-tuning with higher priority or increased security guarantees, or (maybe) exchange them for other rewards.

2. **Why is the platform named "Petals"?**

    "Petals" is a metaphor for people serving different parts of the model. Together, they host the entire language model — [BLOOM](https://huggingface.co/bigscience/bloom).

    While our platform focuses on BLOOM now, we aim to support more [foundation models](https://arxiv.org/abs/2108.07258) in the future.

## Installation

Here's how to install Petals with conda:

```bash
conda install pytorch pytorch-cuda=11.7 -c pytorch -c nvidia
pip install -U petals
```

These commands use Anaconda to install CUDA-enabled PyTorch. If you don't have Anaconda, you can get it [here](https://www.anaconda.com/products/distribution). If you don't want Anaconda, you can install PyTorch [any other way](https://pytorch.org/get-started/locally/). If you want to run models with 8-bit weights, please install **PyTorch with CUDA 11** or newer for compatibility with [bitsandbytes](https://github.com/timDettmers/bitsandbytes).

__System requirements:__ Petals only supports Linux for now. If you don't have a Linux machine, consider running Petals in Docker (see our [image](https://hub.docker.com/r/learningathome/petals)) or, in case of Windows, in WSL2 ([read more](https://learn.microsoft.com/en-us/windows/ai/directml/gpu-cuda-in-wsl)). A CPU is enough to run a client, but you will likely need a GPU to run a server efficiently.

## 🛠️ Development

Petals uses pytest with a few plugins. To install them, run:

```bash
conda install pytorch pytorch-cuda=11.7 -c pytorch -c nvidia
git clone https://github.com/bigscience-workshop/petals.git && cd petals
pip install -e .[dev]
```

To run minimalistic tests, you need to make a local swarm with a small model and some servers. You can find more information about how local swarms work and how to run them in [this tutorial](https://github.com/bigscience-workshop/petals/wiki/Launch-your-own-swarm).

```bash
export MODEL_NAME=bloom-testing/test-bloomd-560m-main

python -m petals.cli.run_server $MODEL_NAME --block_indices 0:12 \
  --identity tests/test.id --host_maddrs /ip4/127.0.0.1/tcp/31337 --new_swarm &> server1.log &
sleep 5  # wait for the first server to initialize DHT

python -m petals.cli.run_server $MODEL_NAME --block_indices 12:24 \
  --initial_peers SEE_THE_OUTPUT_OF_THE_1ST_PEER &> server2.log &

tail -f server1.log server2.log  # view logs for both servers
```
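Before launching the full test suite, you can sanity-check the swarm with a short client script. This is a sketch under assumptions: the `initial_peers` argument mirrors how the tests connect to the swarm, and the peer multiaddr must match the one printed by your first server (with `tests/test.id`, it is the same deterministic value exported for pytest below):

```python
# Sketch: connect a client to the local test swarm and run one generation.
# Assumption: DistributedBloomForCausalLM accepts an `initial_peers` list of
# libp2p multiaddrs; replace the peer ID if your first server prints another.
from transformers import BloomTokenizerFast
from petals import DistributedBloomForCausalLM

MODEL_NAME = "bloom-testing/test-bloomd-560m-main"
INITIAL_PEERS = ["/ip4/127.0.0.1/tcp/31337/p2p/QmS9KwZptnVdB9FFV7uGgaTq4sEKBwcYeKZDfSpyKDUd1g"]

tokenizer = BloomTokenizerFast.from_pretrained(MODEL_NAME)
model = DistributedBloomForCausalLM.from_pretrained(MODEL_NAME, initial_peers=INITIAL_PEERS)

inputs = tokenizer("A cat sat", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=3)
print(tokenizer.decode(outputs[0]))
```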
Then launch pytest:

```bash
export MODEL_NAME=bloom-testing/test-bloomd-560m-main REF_NAME=bigscience/bloom-560m
export INITIAL_PEERS=/ip4/127.0.0.1/tcp/31337/p2p/QmS9KwZptnVdB9FFV7uGgaTq4sEKBwcYeKZDfSpyKDUd1g
PYTHONPATH=. pytest tests --durations=0 --durations-min=1.0 -v
```

After you're done, you can terminate the servers and ensure that no zombie processes are left with `pkill -f petals.cli.run_server && pkill -f p2p`.

The automated tests use a more complex server configuration that can be found [here](https://github.com/bigscience-workshop/petals/blob/main/.github/workflows/run-tests.yaml).

### Code style

We use [black](https://black.readthedocs.io/en/stable/the_black_code_style/current_style.html) and [isort](https://pycqa.github.io/isort/) for all pull requests. Before committing your code, simply run `black . && isort .` and you will be fine.

## 📜 Citation

Alexander Borzunov, Dmitry Baranchuk, Tim Dettmers, Max Ryabinin, Younes Belkada, Artem Chumachenko, Pavel Samygin, and Colin Raffel. [Petals: Collaborative Inference and Fine-tuning of Large Models.](https://arxiv.org/abs/2209.01188) _arXiv preprint arXiv:2209.01188,_ 2022.

```bibtex
@article{borzunov2022petals,
  title = {Petals: Collaborative Inference and Fine-tuning of Large Models},
  author = {Borzunov, Alexander and Baranchuk, Dmitry and Dettmers, Tim and Ryabinin, Max and Belkada, Younes and Chumachenko, Artem and Samygin, Pavel and Raffel, Colin},
  journal = {arXiv preprint arXiv:2209.01188},
  year = {2022},
  url = {https://arxiv.org/abs/2209.01188}
}
```

---
This project is a part of the BigScience research workshop.