name: Tests

on:
  push:
    branches: [ main ]
  pull_request:

jobs:
  run-tests:
    strategy:
      matrix:
        include:
          - { model: 'bigscience/bloom-560m', os: 'ubuntu', python-version: '3.8' }
          - { model: 'bigscience/bloom-560m', os: 'ubuntu', python-version: '3.11' }
          - { model: 'Maykeye/TinyLLama-v0', os: 'ubuntu', python-version: '3.8' }
          - { model: 'Maykeye/TinyLLama-v0', os: 'ubuntu', python-version: '3.11' }
          - { model: 'Maykeye/TinyLLama-v0', os: 'macos', python-version: '3.10' }
          - { model: 'Maykeye/TinyLLama-v0', os: 'macos', python-version: '3.11' }
      fail-fast: false
    runs-on: ${{ matrix.os }}-latest
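    # ^-- each matrix entry becomes a separate job; `runs-on` expands to ubuntu-latest or macos-latest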
    timeout-minutes: 15
    steps:
      - name: Increase swap space
        if: ${{ matrix.os == 'ubuntu' }}
        uses: pierotofy/set-swap-space@master
        with:
          swap-size-gb: 10
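        # ^-- extra swap is only set up on Ubuntu runners, presumably so that loading model weights does not exhaust the runner's RAM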
      - name: Checkout
        uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v3
        with:
          python-version: ${{ matrix.python-version }}
      - name: Cache dependencies
        uses: actions/cache@v3
        with:
          path: ~/.cache/pip
          key: Key-v1-${{ matrix.python-version }}-${{ hashFiles('setup.cfg') }}
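          # ^-- the pip cache is keyed on the Python version and the hash of setup.cfg, so it is rebuilt whenever the dependency list changes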
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install .[dev]
      - name: Test
        run: |
          set -x  # Print executed commands

          export MODEL_NAME="${{ matrix.model }}"
          export REF_NAME="${{ matrix.model }}"
          export ADAPTER_NAME="${{ matrix.model == 'bigscience/bloom-560m' && 'artek0chumak/bloom-560m-safe-peft' || '' }}"
          export TENSOR_PARALLEL_ARGS="${{ matrix.model == 'bigscience/bloom-560m' && '--tensor_parallel_devices cpu cpu' || '' }}"
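          # ^-- adapter and tensor-parallel arguments are filled in only for the bigscience/bloom-560m matrix entries and expand to empty strings otherwise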

          # [Step 1] Set up a tiny test swarm (see https://github.com/bigscience-workshop/petals/wiki/Launch-your-own-swarm)

          python -m petals.cli.run_dht \
            --identity_path tests/bootstrap.id --host_maddrs /ip4/127.0.0.1/tcp/31337 &> bootstrap.log &
          BOOTSTRAP_PID=$!
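          # ^-- "$!" holds the PID of the most recent background process, so each peer can be health-checked and shut down later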

          export INITIAL_PEERS=/ip4/127.0.0.1/tcp/31337/p2p/QmS9KwZptnVdB9FFV7uGgaTq4sEKBwcYeKZDfSpyKDUd1g
          # ^-- multiaddr in INITIAL_PEERS is determined by --identity_path and --host_maddrs

          until [ -s bootstrap.log ]; do sleep 5; done  # wait for DHT init
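          # ^-- "[ -s bootstrap.log ]" succeeds once the log file is non-empty, i.e. the DHT node has started writing output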

          python -m petals.cli.run_server $MODEL_NAME --adapters $ADAPTER_NAME --torch_dtype float32 --num_blocks 5 \
            --mean_balance_check_period 10 \
            --initial_peers $INITIAL_PEERS --throughput 1 &> server1.log &
          SERVER1_PID=$!
          # ^-- rebalancing test: this server chooses blocks 0:5, then sees a gap in the swarm and moves there

          sleep 10  # wait for the 1st server to choose blocks

          python -m petals.cli.run_server $MODEL_NAME --adapters $ADAPTER_NAME --torch_dtype float32 --block_indices 0:5 \
            --identity_path tests/server2.id \
            --initial_peers $INITIAL_PEERS --throughput 1 &> server2.log &
          SERVER2_PID=$!
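          # ^-- this server pins blocks 0:5 explicitly, overlapping with the 1st server and prompting it to rebalance to an uncovered range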

          python -m petals.cli.run_server $MODEL_NAME --adapters $ADAPTER_NAME --torch_dtype float32 --num_blocks 14 \
            --attn_cache_tokens 2048 --max_chunk_size_bytes 1024 \
            --initial_peers $INITIAL_PEERS --throughput auto &> server3.log &
          SERVER3_PID=$!
          # ^-- chunking test
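          #     (the tiny --max_chunk_size_bytes value presumably forces long requests on this server to be processed in several chunks)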

          python -m petals.cli.run_server $MODEL_NAME $TENSOR_PARALLEL_ARGS --torch_dtype float32 --block_indices 0:2 \
            --initial_peers $INITIAL_PEERS --throughput auto &> server4.log &
          SERVER4_PID=$!
          # ^-- tensor parallelism test (not compatible with adapters yet)

          sleep 5  # wait for the log files to appear

          tail -n 100 -f bootstrap.log server*.log &
          LOGGER_PID=$!
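          # ^-- stream every peer's log into the job output so failures below are easier to debug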

          sleep 30  # wait for servers to eval throughput, download layers, and rebalance
          kill -0 $BOOTSTRAP_PID $SERVER1_PID $SERVER2_PID $SERVER3_PID $SERVER4_PID  # ensure all peers survived init
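          # ^-- "kill -0" sends no signal: it only checks that each PID is still alive and returns a non-zero status (failing this step) if any peer has died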

          # [Step 2] Run PyTest

          # Necessary for @pytest.mark.forked to work properly on macOS, see https://github.com/kevlened/pytest-parallel/issues/93
          export no_proxy=*
          export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES

          pytest tests --durations=0 --durations-min=1.0 -v
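          # ^-- --durations=0 with --durations-min=1.0 prints the duration of every test that takes at least 1 second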

          # [Step 3] Check if benchmarks work (their results here are meaningless since it's a tiny swarm of CPU servers)

          python benchmarks/benchmark_inference.py --model $MODEL_NAME --initial_peers $INITIAL_PEERS --torch_dtype float32 \
            --seq_len 3
          python benchmarks/benchmark_forward.py --model $MODEL_NAME --initial_peers $INITIAL_PEERS --torch_dtype float32 \
            --seq_len 3 --batch_size 3 --n_steps 1
          python benchmarks/benchmark_training.py --model $MODEL_NAME --initial_peers $INITIAL_PEERS --torch_dtype float32 \
            --seq_len 3 --batch_size 3 --pre_seq_len 1 --n_steps 1 --task cls
          python benchmarks/benchmark_training.py --model $MODEL_NAME --initial_peers $INITIAL_PEERS --torch_dtype float32 \
            --seq_len 3 --batch_size 3 --pre_seq_len 1 --n_steps 1 --task causal_lm

          # [Step 4] Clean up

          kill -s SIGINT $BOOTSTRAP_PID $SERVER1_PID $SERVER2_PID $SERVER3_PID $SERVER4_PID $LOGGER_PID
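          # ^-- SIGINT (rather than SIGKILL) lets the servers and the DHT node shut down gracefully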
          echo "Done!"