Update readme for the 1st public release (#57)

commit 2eb5843852 (parent 0be21775af)
Author: Alexander Borzunov, committed via GitHub

# PETALS: Collaborative Inference of Large Models
<p align="center">
<img src="https://i.imgur.com/7eR7Pan.png" width="500"><br>
Decentralized platform for running 100B+ language models<br><br>
<a href="https://github.com/bigscience-workshop/petals/actions">
<img src="https://github.com/bigscience-workshop/petals/actions/workflows/run-tests.yaml/badge.svg?branch=main">
</a>
<a href="https://github.com/psf/black">
<img src="https://img.shields.io/badge/code%20style-black-000000.svg">
</a>
</p>
Run BLOOM-176B, the largest open language model, by collaborating over the Internet.
## Key features
- Run inference or fine-tune [BLOOM-176B](https://huggingface.co/bigscience/bloom) by joining compute resources with people all over the Internet. No need to have high-end GPUs.
- One inference step takes ≈ 1 sec — much faster than possible with offloading. Enough for chatbots and other interactive apps.
- Employ any fine-tuning and sampling methods by accessing model's hidden states and changing its control flow — something you can't do in proprietary APIs.
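As a taste of the first point, generating text over the swarm could look roughly like this. This is a hedged sketch, not a confirmed API: the import path and the `DistributedBloomForCausalLM` class name are hypothetical stand-ins, while `AutoTokenizer` is the standard Hugging Face `transformers` entry point.

```python
from transformers import AutoTokenizer

# Hypothetical import path and class name, for illustration only
from petals import DistributedBloomForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom")
model = DistributedBloomForCausalLM.from_pretrained("bigscience/distributed-bloom")

inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
# Each generated token makes one round trip through the remote servers
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
```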
<p align="center">
<b><a href="https://petals.ml/petals.pdf">[Read paper]</a></b> | <b><a href="https://petals.ml/">[View website]</a></b>
</p>
## How does it work?
<p align="center">
<img src="https://i.imgur.com/75LFA0Y.png" width="800">
</p>
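In brief, as the figure sketches: each server hosts a consecutive slice of BLOOM's transformer blocks, while the client keeps the embeddings and any task-specific layers locally and streams hidden states through the chain of servers. A toy, dependency-free illustration of that control flow (not the real Petals networking code; `servers` and `server.forward` are stand-ins):

```python
def run_remote_chain(hidden_states, servers):
    """Toy sketch: pass activations through each server's block slice in order.

    In the real system each call below would be a network round trip to a
    peer holding those layers; here it is just a local function call.
    """
    for server in servers:
        hidden_states = server.forward(hidden_states)
    return hidden_states
```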
### 🚧 This project is in active development
Be careful: some features may not work, interfaces may change, and we have no detailed docs yet (see [roadmap](https://github.com/bigscience-workshop/petals/issues/12)).
A stable version of the code and a public swarm open to everyone will be released in November 2022. You can [subscribe](https://petals.ml/) to be emailed when it happens or fill in [this form](https://forms.gle/TV3wtRPeHewjZ1vH9) to help the public launch by donating GPU time. In the meantime, you can launch and use your own private swarm.
## Code examples
Solving a sequence classification task via soft prompt tuning of BLOOM-176B:
```python
import torch
from torch.nn.functional import cross_entropy

# Initialize distributed BLOOM with soft prompts
model = AutoModelForPromptTuning.from_pretrained(
    "bigscience/distributed-bloom")
# Define optimizer for prompts and linear head
optimizer = torch.optim.AdamW(model.parameters())

# data_loader yields (input_ids, labels) batches
for input_ids, labels in data_loader:
    # Forward pass with local and remote layers
    outputs = model.forward(input_ids)
    loss = cross_entropy(outputs.logits, labels)

    # Distributed backward w.r.t. local params
    loss.backward()   # Compute model.prompts.grad
    optimizer.step()  # Update local params only
    optimizer.zero_grad()
```
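Note the division of labor implied by the comments above: the backward pass travels through the remote layers so that gradients reach the locally stored prompts, but only local parameters (the prompts and the classification head) are ever updated; the remote BLOOM weights stay frozen.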
## Installation
```bash
conda install -y -c conda-forge cudatoolkit-dev==11.3.1 cudatoolkit==11.3.1 cudnn==8.2.1.32
pip install -r requirements.txt
pip install -i https://test.pypi.org/simple/ bitsandbytes-cuda113
```
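After installing, a quick way to confirm that PyTorch sees the CUDA toolkit set up above (a generic check, not specific to this repo):

```python
import torch

print(torch.__version__)          # should import without errors
print(torch.cuda.is_available())  # True if the CUDA 11.3 setup above worked
```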
### Basic functionality
All tests are run on localhost.
```bash
pytest tests/test_block_exact_match.py
# test the full model
pytest tests/test_full_model.py
```
--------------------------------------------------------------------------------
<p align="center">
This project is a part of the <a href="https://bigscience.huggingface.co/">BigScience</a> research workshop.
</p>
<p align="center">
<img src="https://petals.ml/bigscience.png" width="150">
</p>
