name: Tests
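# Run the test suite on every push to the main branch and on every pull request.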
on:
push:
branches: [ main ]
pull_request:
jobs:
run-tests:
runs-on: ubuntu-latest
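    # Run the tests once per Python version in the matrix below; fail-fast is disabled
    # so that a failure on one version does not cancel the remaining matrix jobs.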
strategy:
matrix:
python-version: [ '3.8', '3.9', '3.10', '3.11' ]
fail-fast: false
timeout-minutes: 15
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v3
with:
python-version: ${{ matrix.python-version }}
- name: Cache dependencies
uses: actions/cache@v3
with:
path: ~/.cache/pip
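          # The cache is keyed on the Python version and the hash of setup.cfg (where the
          # dependencies are declared); presumably the "Key-v1" prefix can be bumped by
          # hand to invalidate stale caches.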
key: Key-v1-${{ matrix.python-version }}-${{ hashFiles('setup.cfg') }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
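          # Install petals from the current checkout together with its [dev] extra,
          # which is expected to pull in the test dependencies (pytest etc.).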
pip install .[dev]
- name: Test
run: |
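          # This step starts a small private Petals swarm on localhost, hosting
          # $MODEL_NAME (bigscience/bloom-560m) across four server processes, and then
          # runs the pytest suite against it. $REF_NAME presumably points at the
          # reference checkpoint that the tests compare outputs against, and
          # $ADAPTER_NAME is a PEFT adapter that some of the servers load via --adapters.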
export MODEL_NAME=bigscience/bloom-560m
export REF_NAME=bigscience/bloom-560m
export ADAPTER_NAME=artek0chumak/bloom-560m-safe-peft
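          # Server 1 bootstraps a new private swarm (--new_swarm) and hosts blocks 0:12.
          # Its peer ID is fixed by the --identity file, so its multiaddr is known in advance.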
python -m petals.cli.run_server --converted_model_name_or_path $MODEL_NAME --block_indices 0:12 \
--new_swarm --identity tests/test.id --host_maddrs /ip4/127.0.0.1/tcp/31337 --throughput 1 \
--torch_dtype float32 --compression NONE --attn_cache_tokens 2048 --adapters $ADAPTER_NAME &> server1.log &
SERVER1_PID=$!
sleep 5 # wait for the first server to initialize DHT
export INITIAL_PEERS=/ip4/127.0.0.1/tcp/31337/p2p/QmS9KwZptnVdB9FFV7uGgaTq4sEKBwcYeKZDfSpyKDUd1g
# ^-- server 1 multiaddr is determined by --identity and --host_maddrs
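          # Server 2 joins the swarm via INITIAL_PEERS, hosts blocks 12:22, and also
          # loads the PEFT adapter.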
python -m petals.cli.run_server --converted_model_name_or_path $MODEL_NAME --block_indices 12:22 \
--initial_peers $INITIAL_PEERS --throughput 1 --torch_dtype float32 --adapters $ADAPTER_NAME &> server2.log &
SERVER2_PID=$!
          sleep 10  # wait for the initial servers to declare their blocks, so that later servers can decide which blocks to serve
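          # Server 3 re-hosts blocks 12:15 (overlapping with server 2) and exercises the
          # tensor-parallel code path by splitting its blocks across two CPU "devices".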
python -m petals.cli.run_server --converted_model_name_or_path $MODEL_NAME --block_indices 12:15 \
--initial_peers $INITIAL_PEERS --throughput 1 --torch_dtype float32 --tensor_parallel_devices cpu cpu &> server3.log &
SERVER3_PID=$!
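          # Server 4 is given no explicit block indices; with --num_blocks 3 it decides
          # on its own which 3 blocks to serve.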
python -m petals.cli.run_server --converted_model_name_or_path $MODEL_NAME --num_blocks 3 \
--initial_peers $INITIAL_PEERS --throughput 1 --torch_dtype float32 --adapters $ADAPTER_NAME &> server4.log &
SERVER4_PID=$!
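          # Stream the server logs into the job output for easier debugging.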
tail -n 100 -f server*.log &
LOGGER_PID=$!
sleep 30 # wait for servers to download layers
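          # kill -0 sends no signal; it only fails if one of the processes has already died.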
kill -0 $SERVER1_PID $SERVER2_PID $SERVER3_PID $SERVER4_PID # ensure all servers survived init
pytest tests --durations=0 --durations-min=1.0 -v
kill -0 $SERVER1_PID $SERVER2_PID $SERVER3_PID $SERVER4_PID # ensure all servers survived tests
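          # Shut down all servers and the log tailer gracefully.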
kill -s SIGINT $SERVER1_PID $SERVER2_PID $SERVER3_PID $SERVER4_PID $LOGGER_PID
echo "Done!"