Commit Graph

59 Commits

Author SHA1 Message Date
Alexander Borzunov
056f22515a
Prioritize short inference, unmerge pools for long inference (#458)
Right now, long inference requests may occupy the Runtime for a few seconds without yielding it to short requests (the most latency-sensitive ones). This PR fixes that by barring long requests from the merged pool and prioritizing short ones.
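A minimal sketch of the prioritization idea (the threshold and class names below are hypothetical, not the actual Petals constants or task-pool code):

```python
import heapq
import itertools

LONG_REQUEST_TOKENS = 512  # assumed threshold for illustration

class PrioritySketchPool:
    """Toy pool: short requests get a smaller priority value, so the
    Runtime dequeues them before long-running ones."""

    def __init__(self):
        self._heap, self._counter = [], itertools.count()

    def submit(self, payload, num_tokens: int):
        # Long requests are deprioritized (and, per this PR, also barred
        # from the merged pool so they cannot monopolize the Runtime).
        priority = 1 if num_tokens > LONG_REQUEST_TOKENS else 0
        heapq.heappush(self._heap, (priority, next(self._counter), payload))

    def next_task(self):
        return heapq.heappop(self._heap)[2]
```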
2023-08-11 09:24:33 +04:00
Alexander Borzunov
8c546d988a
Test Llama, rebalancing, throughput eval, and all CLI scripts (#452)
This PR extends CI to:

1. Test Llama code using [TinyLlama-v0](https://huggingface.co/Maykeye/TinyLLama-v0).
2. Test rebalancing (sets up a situation where the 1st server needs to change its original position).
3. Check that the benchmark scripts run (in case someone breaks their code). Note that the benchmark results are meaningless here (since they're measured on a tiny swarm of CPU servers, with low `--n_steps`).
4. Test `petals.cli.run_dht`.
5. Increase swap space and watch free RAM (a common issue is that actions are cancelled without explanation if there's not enough RAM - so it's a useful reminder + debug tool).
6. Fix flaky tests for bloom-560m by increasing tolerance.

Other minor changes: fix `--help` messages to show defaults, fix docs, tune rebalancing constants.
2023-08-08 19:10:27 +04:00
Vadim Peretokin
d0b5af34cd
Fix typo and make blocks message more informative (#437)
The message really doesn't tell me much as a user, since I never touched `update_period` to begin with:

```
Aug 06 09:43:07.287 [WARN] [petals.server.server.run:701] Declaring blocs to DHT takes more than --update_period, consider increasing it
```

Made it better and more informative.
2023-08-06 16:47:21 +04:00
Alexander Borzunov
351e96bc46
Penalize servers that use relays during rebalancing (#428)
Servers accessible only via relays may introduce issues if they are the only servers holding certain blocks. Specifically, a connection to such a server may be unstable or opened only after a delay.

This PR changes their self-reported throughput so that the rebalancing algorithm prefers directly reachable servers when assigning blocks to hosts.
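As a rough sketch of the penalty (the factor below is assumed for illustration, not the value used in the PR):

```python
RELAY_PENALTY = 0.2  # assumed factor for illustration

def self_reported_throughput(measured_rps: float, uses_relay: bool) -> float:
    # Relayed servers advertise a discounted throughput, so the
    # rebalancing algorithm prefers directly reachable servers.
    return measured_rps * RELAY_PENALTY if uses_relay else measured_rps
```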
2023-08-03 02:00:43 +02:00
Alexander Borzunov
fd19c21859
Update --update_period and --expiration defaults (#410) 2023-07-23 17:22:04 +04:00
justheuristic
5af04524dd
Split long sequences into chunks (#403)
This PR is designed to avoid OOMs when processing long sequences, which happen due to the huge attention-logit matrices.
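A minimal sketch of the approach, assuming a generic transformer block (a real implementation must also carry attention caches between chunks, as Petals' inference cache does):

```python
import torch

def forward_in_chunks(block, hidden_states: torch.Tensor, max_chunk: int = 1024):
    # Process a long sequence piece by piece so the attention-logit
    # matrices (seq_len x seq_len) never materialize for the full input.
    outputs = []
    for start in range(0, hidden_states.shape[1], max_chunk):
        outputs.append(block(hidden_states[:, start : start + max_chunk]))
    return torch.cat(outputs, dim=1)
```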

Co-authored-by: Alexander Borzunov <borzunov.alexander@gmail.com>
2023-07-22 23:10:46 +04:00
Alexander Borzunov
8666653cf5
Fix routing through relay, default network RPS, --token, logging, readme (#399)
* Hide GeneratorExit in _iterate_inference_steps()
* Update README.md about `--public_name`
* Use `.from_pretrained(..., use_auth_token=token)` instead of `token=token` until it's fully supported across HF libs
* Use default network speed 25 Mbit/s
* Apply relay penalty in max-throughput routing
* Replace RPS with "tokens/sec per block" in logs
* Increase default expiration
2023-07-22 18:27:58 +04:00
Alexander Borzunov
b6b3ae964f
Fix --attn_cache_tokens default (#392) 2023-07-20 23:20:15 +04:00
Alexander Borzunov
057a2fb5de
Support Llama 2 (#379) 2023-07-19 19:15:53 +04:00
Alexander Borzunov
3218534745
Fix --token arg (#378) 2023-07-19 15:25:34 +04:00
justheuristic
5a8de2f1f8
Fix handler memory leak, get rid of mp.Manager (#373)
This PR removes a memory leak somewhere within handler.py that was related to mp.SyncManager.
2023-07-19 13:31:47 +04:00
Alexander Borzunov
c735dd7ba3
Update transformers to 4.31.0 and peft to 0.4.0 (#371) 2023-07-19 05:15:30 +04:00
Alexander Borzunov
a6fdfc0556
Fix AssertionError on rebalancing (#370) 2023-07-19 03:22:19 +04:00
Alexander Borzunov
62d9ed5ce7
Implement shortest-path routing for inference (#362)
This PR:

1. **Adds shortest-path routing for inference.** We build a graph with client-server and server-server latencies and compute costs, as well as empirically measured overheads. For client-server latencies, we ping possible first and last servers in a sequence in `SequenceManager.update()`. We penalize servers that may not have enough cache for our request. This uses info added to DHT in #355, #356, #358. (A toy sketch of the routing step appears after this list.)

2. **Makes a server ping neighboring servers in addition to the next ones.** This gives the client a chance to switch servers even before it uses all blocks of the current one (e.g., because a neighboring server is faster). This feature is not enabled yet, since it increases the graph size for N servers to O(N^2), but we may enable it if needed.

3. **Fixes a `SequenceManager` bug with the first `update()`.** Previously, this update was likely to produce incorrect information and cause `MissingBlocksError`s until the next update happened.
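A toy version of the routing step, assuming the latency/cost graph is already built (all names here are hypothetical):

```python
import heapq

def shortest_route(edges, start="client_in", goal="client_out"):
    # Dijkstra over a graph whose nodes are the client's entry/exit
    # points and (server, block) pairs; edge costs combine measured
    # latencies, compute costs, and empirical overheads.
    dist, prev, heap = {start: 0.0}, {}, [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, cost in edges.get(node, ()):
            if d + cost < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = d + cost, node
                heapq.heappush(heap, (d + cost, nxt))
    path, node = [], goal
    while node != start:  # reconstruct the chosen server chain
        path.append(node)
        node = prev[node]
    return path[::-1]
```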
2023-07-18 08:46:36 +04:00
Alexander Borzunov
11f0d992d7
Report inference, forward, and network RPS separately (#358)
Inference RPS may be very different from forward RPS. E.g., currently bnb uses a completely different algorithm for NF4 inference. We report detailed RPS info that can then be used for shortest-path routing for inference.
2023-07-17 13:45:59 +04:00
Alexander Borzunov
81c4a45ca2
Make a server ping next servers (#356)
This PR makes a server ping potential next servers in a chain and report the RTTs to the DHT. This will be used for shortest-path routing.
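In spirit, the measurement is as simple as the sketch below (hypothetical helper names; the real code pings via libp2p and stores results in the DHT):

```python
import time

def measure_next_server_rtts(ping_fn, next_server_ids):
    rtts = {}
    for peer_id in next_server_ids:
        start = time.perf_counter()
        ping_fn(peer_id)  # lightweight RPC round-trip
        rtts[peer_id] = time.perf_counter() - start
    return rtts  # then published to the DHT for routing
```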
2023-07-15 20:16:21 +04:00
Alexander Borzunov
2c8959e713
Share more info about a server in DHT (#355) 2023-07-15 03:36:31 +04:00
justheuristic
37fdcb3fe0
Switch adapters slightly faster (#353)
Currently, each `TransformerBackend.inference_step` looks for adapters and sets the correct adapter type for each block. This is not very expensive, but it can measurably affect inference time.

This pull request uses faster adapter switching with just one variable assignment, without iterating over `block.modules()`.
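Schematically, the two approaches look like this (a hypothetical simplification, not the actual Petals code):

```python
from typing import Optional

# Before: every inference step walks all submodules to set the adapter.
def slow_switch(block, adapter_name: str) -> None:
    for module in block.modules():
        if hasattr(module, "active_adapter"):
            module.active_adapter = adapter_name

# After: all LoRA layers read one shared holder at call time, so a
# switch is a single attribute assignment.
class AdapterState:
    active_adapter: Optional[str] = None

def fast_switch(state: AdapterState, adapter_name: str) -> None:
    state.active_adapter = adapter_name
```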
2023-07-14 23:04:55 +04:00
Alexander Borzunov
9703358df0
Fix bugs in _choose_num_blocks() added in #346 (#354) 2023-07-14 22:33:48 +04:00
Alexander Borzunov
1a78638c02
Test that bitsandbytes is not imported when it's not used (#351)
We avoid importing bitsandbytes when it's not used, since bitsandbytes doesn't always find correct CUDA libs and may raise exceptions because of that.
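One way to test this kind of invariant is to probe a fresh interpreter (a sketch; the actual test's trigger condition may differ):

```python
import subprocess
import sys

# Assert that importing petals alone does not pull in bitsandbytes.
code = "import petals, sys; assert 'bitsandbytes' not in sys.modules"
subprocess.run([sys.executable, "-c", code], check=True)
```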
2023-07-14 18:40:47 +04:00
justheuristic
010857a834
Estimate adapter memory overhead in choose_num_blocks() (#346)
* estimate adapter memory overhead
* reduce number of heads based on that

---------

Co-authored-by: Alexander Borzunov <borzunov.alexander@gmail.com>
2023-07-14 01:03:42 +03:00
Artem Chumachenko
b9f0a5467f
Support peft LoRA adapters (#335)
Implement an option to deploy PEFT adapters to a server. Clients can set `active_adapter=...` to use these adapters.

---------

Co-authored-by: Aleksandr Borzunov <borzunov.alexander@gmail.com>
Co-authored-by: justheuristic <justheuristic@gmail.com>
2023-07-12 15:22:28 +03:00
Alexander Borzunov
fa095f6461
Use 4-bit for llama by default, use bitsandbytes 0.40.0.post3 (#340)
NF4 inference with bitsandbytes 0.40.0.post3 is ~2x faster than int8 inference, though training is still ~3x slower, see:

- [bitsandbytes 0.40.0 Release notes](https://github.com/TimDettmers/bitsandbytes/releases/tag/0.40.0)
- [RPS benchmarks](https://github.com/bigscience-workshop/petals/pull/333#issuecomment-1614040385)

We've decided to use NF4 by default for LLaMA.
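For reference, this is what NF4 loading looks like with plain transformers + bitsandbytes (an illustration, not Petals' internal code; the checkpoint name is just an example):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Weights are stored as 4-bit NormalFloat; compute runs in bfloat16.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b", quantization_config=quant_config
)
```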
2023-07-11 18:53:17 +04:00
Alexander Borzunov
158013a671
Implement direct server-to-server communication (#331)
Implement #226.
2023-07-11 17:29:34 +04:00
Alexander Borzunov
de930918a0
Support loading blocks in 4-bit (QLoRA NF4 format, disabled by default) (#333) 2023-07-03 20:13:04 +04:00
Alexander Borzunov
d126ee3053
Add benchmark scripts (#319)
This PR:

- Adds benchmark scripts for inference, forward pass, and full training step (e.g. used for experiments in our paper).
- Fixes bug with dtypes in `petals.DistributedBloomForSequenceClassification`.
- (minor refactor) Moves `DTYPE_MAP` to `petals.constants` as a useful constant.
2023-06-30 01:12:59 +04:00
Alexander Borzunov
cb3f018f9f
Add LLaMA support (#323)
This PR:

1. **Abolishes the model conversion procedure.** Now, models are downloaded directly from original repositories like https://huggingface.co/bigscience/bloom. Servers download only shards with blocks to be hosted, and clients download only shards with input/output embeddings and layernorms.

    - BLOOM is loaded from `bigscience/bloom`, but we use the DHT prefix `bigscience/bloom-petals` for backward compatibility. Same with smaller BLOOMs and BLOOMZ.
    - LLaMA can be loaded from any repo like `username/llama-65b-hf`, but we use the DHT prefix `llama-65b-hf` (without the username) to accommodate blocks from different repos (there are a few of them with minor differences, such as `Llama` vs. `LLaMA` in the class name).

2. **Refactors the client to generalize it for multiple models.** Now, we have `petals.models` packages that contain model-specific code (e.g. `petals.models.bloom`, `petals.models.llama`). General code (e.g. CPU-efficient LM head, p-tuning) is kept in `petals.client`.

3. **Introduces** `WrappedLlamaBlock`, `DistributedLlamaConfig`, `DistributedLlamaForCausalLM`, `DistributedLlamaForSequenceClassification`, and `DistributedLlamaModel` compatible with Petals functionality (p-tuning, adapters, etc.).

4. **Introduces** `AutoDistributedConfig` that automatically chooses the correct config class (`DistributedLlamaConfig` or `DistributedBloomConfig`). The refactored configs contain all model-specific info for both clients and servers.
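A usage sketch based on the description above (assuming the usual `from_pretrained` entry point of HF-style Auto classes):

```python
from petals import AutoDistributedConfig

# Picks DistributedBloomConfig or DistributedLlamaConfig from the
# checkpoint's model type; the config carries all model-specific info
# needed by both clients and servers.
config = AutoDistributedConfig.from_pretrained("bigscience/bloom")
print(type(config).__name__)  # e.g., DistributedBloomConfig
```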

Upgrade instructions:

- Remove disk caches for blocks in old (converted) format to save disk space. That is, remove `~/.cache/petals/model--bigscience--bloom-petals` and `~/.cache/petals/model--bigscience--bloomz-petals` directories (if present).
2023-06-23 15:46:10 +04:00
Max Ryabinin
5c0733711a
Use number of tokens for attn_cache_size (#286)
* Use number of tokens for attn_cache_size

* Fix cache_bytes_per_block

* Rename attn_cache_size to attn_cache_tokens
2023-06-17 14:14:31 +03:00
Max Ryabinin
c839173e57
Determine block dtype in a unified manner (#325)
* Extract backend_dtype, remove duplicate DTYPE_MAP

* Use bfloat16 as the default dtype, resolve dtype in load_pretrained_block
2023-06-16 12:52:51 +03:00
Max Ryabinin
3e7ae5116d
Remove unused imports and attributes (#324)
* Remove unused imports and attributes
2023-06-11 00:44:41 +03:00
Alexander Borzunov
d9e7bfc949
Divide compute throughput by average no. of used blocks (#314)
See #192.
2023-05-09 23:23:08 +04:00
Alexander Borzunov
21c3526ec1
Start SequenceManager's thread only after first .make_sequence() (#301)
**Why?**

- We'd like to avoid excess threads for the original sequence manager if we only use its slices (e.g., when we add adapters or need only a subset of model blocks).

- If we create a sequence manager just before a fork (e.g., in a web app backend or a multi-threaded benchmark), we'd like to avoid excess threads in the original process and only start this thread in the child processes that actually call `.make_sequence()`.
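The lazy-start pattern, sketched (a hypothetical simplification of the sequence manager):

```python
import threading

class LazySequenceManager:
    def __init__(self):
        self._update_thread = None  # no background thread yet

    def make_sequence(self):
        if self._update_thread is None:
            # Start the updater only on first real use, so slices and
            # pre-fork instances never spawn a thread of their own.
            self._update_thread = threading.Thread(
                target=self._update_loop, daemon=True
            )
            self._update_thread.start()
        ...  # build and return the chain of servers

    def _update_loop(self):
        ...  # periodically refresh routing info
```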
2023-04-12 21:38:43 +04:00
Alexander Borzunov
2116df08bc
Fix deps, enable 8-bit by default for TP (#298)
This PR fixes issues of #290:

- hivemind bfloat16 codec crashed on dummy tensors (with 0 elements), see https://github.com/learning-at-home/hivemind/pull/560 (this PR temporarily makes Petals depend on the latest hivemind version from the repo)
- transformers version check mismatched with the version allowed in `setup.cfg`

Also:

- This PR enables 8-bit by default for TP. Even though TP in 8-bit may be slower, we currently prefer to host more blocks to increase the network's stability.
2023-03-29 04:21:37 +04:00
Alexander Borzunov
fee19e9b9b
Use get_logger(__name__) instead of get_logger(__file__) (#265) 2023-02-19 05:46:17 +04:00
Alexander Borzunov
38b071135b
Show visible maddrs for public swarm too (#263) 2023-02-19 04:34:47 +04:00
Alexander Borzunov
2a5070aa1a
Improve reachability logs (#253) 2023-02-07 01:52:36 +04:00
justheuristic
c4938bc23e
Merge inference pools into one to increase inference speed (#225)
It turns out that using a separate pool for each block led to a significant slowdown; see #224 for details.
2023-01-19 19:38:21 +04:00
Alexander Borzunov
af3da5bb04
Choose --num_blocks automatically for all models (#217) 2023-01-16 01:53:09 +04:00
Alexander Borzunov
6b12b0d050
Report server version and dht.client_mode in rpc_info(), check for updates on startup (#209)
This PR:

1. Shows the current Petals version and checks for updates on startup.
2. Reports the current version and DHT mode in `rpc_info()`, so it can be shown on http://health.petals.ml or used on clients for efficient routing.
2023-01-13 07:46:10 +04:00
justheuristic
771ca590e7
Add service checking direct reachability from peers (#195)
Servers joining from behind NATs/firewalls usually take several minutes to join a libp2p relay before they become accessible from the outside Internet. Moreover, requests to such servers are slower and more likely to fail (e.g., if the server switches a relay at the moment). If such servers host certain DHT keys, the swarm may occasionally lose read/write access to these keys, which results in:

- Clients being unable to find any servers hosting a certain block.
- All servers starting rebalancing to the same place to close the alleged "gap" in the swarm.

This PR modifies servers so that DHT keys are only hosted on **directly reachable** servers (the ones that aren't behind a NAT/firewall). This way, the DHT becomes more stable and works faster. Of course, the servers behind NATs/firewalls still accept requests for running inference/forward/backward on the blocks they hold (it's more acceptable for these requests to be slower or to fail).
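In hivemind terms, the change boils down to joining the DHT in client mode when a server is not directly reachable (a sketch, assuming reachability is already known):

```python
import hivemind

def make_dht(reachable_directly: bool) -> hivemind.DHT:
    # Client-mode peers can read and write DHT keys but do not host
    # them, so keys never land on servers behind NATs/firewalls.
    return hivemind.DHT(start=True, client_mode=not reachable_directly)
```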

Co-authored-by: Alexander Borzunov <borzunov.alexander@gmail.com>
2023-01-13 03:05:39 +04:00
Alexander Borzunov
a617ce3cfa
Fix psutil-related AccessDenied crash, disable --load_in_8bit by default in case of TP (#188)
* Don't count open fds since it leads to AccessDenied crashes on some machines
* Use --load_in_8bit=False by default in case of tensor parallelism
* Install petals from PyPI in fine-tuning tutorials
2023-01-10 13:04:52 +04:00
Egiazarian Vage
93bed7da5a
Support libp2p relays for NAT traversal (#186)
- Added relay options to servers
- Enabled relay options by default
- Changed hivemind version to 1.1.5
- Moved reachability check to be performed after blocks are loaded

Co-authored-by: Alexander Borzunov <borzunov.alexander@gmail.com>
2023-01-09 20:41:23 +04:00
justheuristic
ae9e71fe8e
Add local tensor-parallel fwd/bwd (#143)
This pull request adds an option to run Petals server on multiple local GPUs. It uses https://github.com/BlackSamorez/tensor_parallel

- 8bit approximation error same as in main (mean~=2% q0.9~=5%)
    - TP=1, 2, 3 (see screenshots above)
- forward, grad w.r.t. input and inference exact match with main with TP=1
- `>=`80% GPU utilization with 3x 1080ti, batch = 8 tokens
- throughput measured with and without TP
- TP on 1080Tis has near-linear speedup comparable to the benchmarks (see first message)
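The tensor_parallel library's documented usage is a one-line wrap (illustrative only; Petals wires this into its server startup, and the checkpoint name is just an example):

```python
import tensor_parallel as tp
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")
# Shard the model across local GPUs; forward/backward then run
# tensor-parallel across the listed devices.
model = tp.tensor_parallel(model, ["cuda:0", "cuda:1"])
```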


Co-authored-by: Iaroslav Lisniak <yalisnyak@nes.ru>
Co-authored-by: Andrei Panferov <andrei@blacksamorez.ru>
Co-authored-by: Alexander Borzunov <borzunov.alexander@gmail.com>
2023-01-03 18:35:51 +03:00
Alexander Borzunov
668b736031
Fix logging: do not duplicate lines, enable colors in Colab (#156) 2022-12-15 09:12:18 +04:00
Alexander Borzunov
041ad20891
Check reachability automatically and give advice how to fix it (#155)
1. If we connect to the **public swarm**, the server now **automatically checks its DHT's reachability** from the outside world using the API at http://health.petals.ml. This is important to prevent unreachable servers from proceeding (they create issues for clients, such as repetitive retries).

    If http://health.petals.ml is down, the server proceeds without the check (so we don't depend on it). However, if health.petals.ml is up and explicitly tells us that we are unreachable, the server explains the reason and how to fix it.

    The check may be disabled with the `--skip_reachability_check` option (though I can't imagine cases where someone needs to use it).

2. Added `--port` and `--public_ip` as **simplified convenience options** for users not familiar with `--host_maddrs` and `--announce_maddrs`.
2022-12-15 05:04:09 +04:00
Alexander Borzunov
73df69a117
Reset MemoryCache during rebalancings (#154)
Before this PR, if inference sessions were open right when rebalancing was triggered, their cache was never properly destroyed.
2022-12-15 00:11:46 +04:00
Alexander Borzunov
701ec7e53e
Clean up disk space (#152) 2022-12-13 18:50:43 +04:00
justheuristic
b04982c1a2
Bump transformers to 4.25.1 (#151)
- latest accelerate, transformers, huggingface_hub
- rearrange attention caches to support https://github.com/huggingface/transformers/pull/18344
- remove unused code
- fix edge case where session crashes when receiving seq length 0
- assert transformer version when importing WrappedBloomBlock

Co-authored-by: Alexander Borzunov <borzunov.alexander@gmail.com>
Co-authored-by: Max Ryabinin <mryabinin0@gmail.com>
2022-12-13 11:03:49 +03:00
Alexander Borzunov
e4dc938dfe
Fix OOMs during server rebalancing (#150)
The OOMs were caused by the cyclic references `TransformerBackend <-> PrioritizedTaskPool`, which could not be garbage-collected properly. Still, I've added explicit tensor removal just in case.
2022-12-13 00:44:40 +04:00
Alexander Borzunov
83d9493b6c
Improve block size calculations (#149) 2022-12-12 13:15:23 +04:00