🚀 <b><a href="https://colab.research.google.com/drive/1Ervk6HPNS6AYVr3xVdQnY5a-TjjmLCdQ?usp=sharing">Try now in Colab</a></b>
</p>
Connect your own GPU and increase Petals capacity by running a server in an [Anaconda](https://www.anaconda.com) env:

```bash
conda install pytorch cudatoolkit=11.3 -c pytorch
pip install -U petals
python -m petals.cli.run_server bigscience/bloom-petals
```
Or use our GPU-enabled [Docker](https://www.docker.com) image:

```bash
sudo docker run --net host --ipc host --gpus all --volume petals-cache:/cache --rm \
    learningathome/petals:main python -m petals.cli.run_server bigscience/bloom-petals
```