Commit Graph

108 Commits (main)

Author SHA1 Message Date
Artem Chumachenko 30f522d1a0
Fix dummy cache allocation (#574)
* Fix dummy cache allocation

* Try mps device selection

* Rechain reloc
4 weeks ago
Artem Chumachenko d6f4f80f3f
Fix Mixtral-related issues (#570)
This PR fixes problems related to #569:
- block initialization
- throughput calculation and cache usage
- mixtral in tests

Beam search is removed for Mixtral and Llama for now. Those models use DynamicCache, which requires a special function to modify (see https://github.com/huggingface/transformers/blob/main/src/transformers/cache_utils.py#L161)

---------

Co-authored-by: Max Ryabinin <mryabinin0@gmail.com>
1 month ago
Artem Chumachenko d2fcbbc72e
Add Mixtral models (#553)
* Add a somewhat workable version

* Fix generation

* Fixes

* Choose right attn

* style

* fix bloom

* remove unnes

* Update src/petals/models/mixtral/model.py

Co-authored-by: Max Ryabinin <mryabinin0@gmail.com>

* fix order of init

---------

Co-authored-by: Max Ryabinin <mryabinin0@gmail.com>
1 month ago
justheuristic 2ad0b2b936
Fix p2p pushing in rpc_inference (by @miaoqijun), support transformers 4.38.2 (#563)
This pull request solves #560 using a solution proposed by @miaoqijun .
It also bumps transformers to the latest version to test with the latest code.

---------

Co-authored-by: Yingtong Dou <ytongdou@gmail.com>
2 months ago
Alexander Borzunov 47d50e1e29
Improve default arguments for clients and servers (#530)
This PR updates multiple default arguments in clients and servers:

1. **The client defaults to `torch_dtype=torch.float32` instead of `torch_dtype="auto"`.**

    The old default was to load weights in the dtype they are saved in (usually bfloat16/float16), which caused issues when the client was run on CPU (the default unless you call `.cuda()`). Specifically, bfloat16 is slow on most CPUs (unless a CPU supports AVX512) and float16 can't be run natively and leads to an exception. This default was a legacy of the earliest Petals versions designed to run BLOOM - its embeddings were so big that they didn't fit into RAM in float32 (e.g., in Colab). The newer models don't have this issue.

    In contrast, the new default leads to good speed on all CPUs and is consistent with PyTorch and HF Transformers. Also, the client now shows the "bfloat16 on non-AVX512 CPU" warning in all cases (previously, it was shown only if the machine had enough RAM to fit float32 weights, which could hide the actual reason for slow inference).

    **Note:** This change is backward-incompatible, so we have to increase at least the minor package version (2.2.0 -> 2.3.0.dev0).

2. **The server uses 2x smaller `--attn_cache_tokens`.**

    The old default led to loading 39 (out of 80) or 78 (out of 80) blocks for popular models on some GPU types, which visibly slowed down inference due to an excess network hop. It also reserved too much cache, so inference slowed down long before the cache was actually used up.

    The new default leads to more efficient block layouts and makes the inference routing algorithm choose alternative paths through other servers when a particular server already has enough active inference sessions (= its cache is full).

3. **The client's max number of retries can be limited by the `PETALS_MAX_RETRIES` env var.**

    This is to limit `ClientConfig.max_retries` in tests, so we see tracebacks instead of retrying indefinitely in case of errors.
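As a rough illustration of points 1 and 3 (assuming the usual `petals.AutoDistributedModelForCausalLM` client API and an example model name), client code is affected roughly like this:

```python
# Sketch only: shows where the new defaults apply; the model name is just an example.
import os

# Limit retries in tests so errors surface as tracebacks instead of endless retries
# (assumption: the variable is read when the client config is created).
os.environ["PETALS_MAX_RETRIES"] = "3"

import torch
from petals import AutoDistributedModelForCausalLM

# torch_dtype now defaults to torch.float32, which is fast and exception-free on any CPU.
# Pass bfloat16 explicitly if you run on a GPU or an AVX512-capable CPU.
model = AutoDistributedModelForCausalLM.from_pretrained(
    "petals-team/StableBeluga2", torch_dtype=torch.bfloat16
)
```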
7 months ago
FYY a2484b3053
Fix file locks in NFS-mounted directories (#517)
Fix #515.
8 months ago
Alexander Borzunov 5ce4f1a159
Store (start_block, end_block) in each DHT record for reliability (#510)
This PR fixes gaps in the DHT server info caused by unavailable DHT keys. Now, one DHT key is enough to get info about all blocks hosted by a server, so the info stays visible unless all of the server's keys become unavailable.

Also, this PR refactors `petals.client.routing` and `petals.server.block_selection` modules to use the common `compute_spans()` function (defined in `petals.utils.dht`) and `RemoteSpanInfo` class (defined in `petals.data_structures`).
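For intuition, here is a simplified sketch of the span idea; the field names are illustrative, not the exact `RemoteSpanInfo` definition:

```python
# Simplified sketch: each server's DHT record for any of its blocks carries the full span.
from dataclasses import dataclass

@dataclass
class Span:
    peer_id: str  # which server hosts the blocks
    start: int    # first hosted block (inclusive)
    end: int      # past-the-last hosted block (exclusive)

# Reading a single key, e.g. the record for block 42, is now enough to learn that
# this server also hosts blocks 40..55, even if the other keys are unavailable.
record_for_block_42 = Span(peer_id="QmExamplePeer", start=40, end=56)
assert record_for_block_42.start <= 42 < record_for_block_42.end
```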
8 months ago
Alexander Borzunov dd4a3230bc
Add Falcon support (#499)
This PR adds:

- Support for models based on `transformers.FalconModel` (the in-library format for Falcon). Tested on Falcon-40B.
- CI tests for Falcon-RW-1B.
- `--throughput dry_run` option to evaluate throughput and exit right away (implemented by @mryab).

Limitations:

- Backward pass support is broken for now, will be fixed in #500.

Co-authored-by: Max Ryabinin <mryabinin0@gmail.com>
8 months ago
Alexander Borzunov 6ef6bf5fa2
Create model index in DHT (#491)
This PR creates an index of models hosted in the swarm - it is useful to know which custom models users run, so they can be displayed at https://health.petals.dev as "not officially supported" models.
9 months ago
Alexander Borzunov 02fc71eb25
Fix race condition in MemoryCache (#487) 9 months ago
Alexander Borzunov dc0072fde1
Wait for DHT storing state OFFLINE on shutdown (#486) 9 months ago
Alexander Borzunov 459933f846
Remove no-op process in PrioritizedTaskPool (#484)
Please revert this if you ever need to make `PrioritizedTaskPool` a process again.
9 months ago
Alexander Borzunov 26ebbfe8f0
Support macOS (#477)
This PR makes both clients and servers work on macOS. Specifically, it:

- Follows https://github.com/learning-at-home/hivemind/pull/586 to run a macOS-compatible `p2pd` binary (both x86-64 and ARM64 are supported)
- Fixes forking issues and tests on macOS, Python 3.10+
- Introduces basic support for serving model blocks on Apple M1/M2 GPUs (torch.mps)
- Increases the default limit on the number of open files (the default is not enough on Linux and is very small on macOS)
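The open-files change is roughly equivalent to the following sketch (the cap value is an assumption, not necessarily what the code uses):

```python
# Raise the soft limit on open file descriptors up to the hard limit (or a sane cap).
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
desired = 32768 if hard == resource.RLIM_INFINITY else min(32768, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (max(soft, desired), hard))
```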
9 months ago
justheuristic c08d09c4d3
Rewrite MemoryCache alloc_timeout logic (#434)
- rpc_inference: the server now accepts an allocation timeout from the user (defaults to no timeout)
- bugfix: the inference timeout is now measured from the moment the request is received
    - previously, you would have to wait for your timeout plus the time it took to sort through the queue (other users' timeouts)
    - now, you get AllocationFailed if you had to wait for over (timeout) seconds, regardless of other users
- a request for inference with no timeout now fails instantly if there is not enough memory available
- the number of bytes per dtype is now correctly determined for int, bool & other types
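A simplified sketch of the new timing semantics (the helper names are made up; the real MemoryCache code differs):

```python
# Sketch: the allocation clock starts when the request is received, so waiting behind
# other users' requests counts against *this* request's timeout.
import time
from typing import Optional


class AllocationFailed(Exception):
    pass


def allocate_with_timeout(cache, nbytes: int, timeout: Optional[float]):
    deadline = None if timeout is None else time.monotonic() + timeout
    while not cache.try_reserve(nbytes):  # hypothetical non-blocking helper
        if deadline is not None and time.monotonic() >= deadline:
            raise AllocationFailed(f"Could not allocate {nbytes} bytes within {timeout} s")
        time.sleep(0.01)  # wait for other sessions to free memory
    return cache.handle(nbytes)  # hypothetical helper returning a cache handle
```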


---------

Co-authored-by: Your Name <you@example.com>
Co-authored-by: Alexander Borzunov <borzunov.alexander@gmail.com>
Co-authored-by: Aleksandr Borzunov <hxrussia@gmail.com>
9 months ago
Alexander Borzunov df8ab09ca2
Hide excess key message (#476)
Before:

```
Aug 23 23:51:31.394 [INFO] Loaded Maykeye/TinyLLama-v0 block 0, _IncompatibleKeys(missing_keys=[], unexpected_keys=['self_attn.rotary_emb.inv_freq'])
```

After:

```
Aug 23 23:51:31.394 [INFO] Loaded Maykeye/TinyLLama-v0 block 0
```

Hiding this because the excess keys in Llama-based models are harmless as of the latest transformers release.
9 months ago
Alexander Borzunov a9b0e9ff1a
Support loading weights from Safetensors on server (#473) 9 months ago
justheuristic 9250025140
Support transformers 4.32.x (#471) 9 months ago
Alexander Borzunov de2475f31c
Make client compatible with transformers' GenerationMixin (#464)
This PR drops custom generation code and introduces compatibility with `transformers.GenerationMixin` instead. This includes support for more sampling options (`top_p`, `top_k`, `repetition_penalty` requested in #460) and beam search - all of that is now identical to running the model with transformers locally.

Most features (excluding beam search and other rarely used stuff) are also compatible with resuming existing sessions.

### Breaking changes

If `.generate()` or forward passes are being run inside an `.inference_session()` context, they now use the opened session by default. So, these snippets are now equivalent:

```python
# Using default session
with model.inference_session(max_length=100):
    output_ids = model.generate(input_ids, max_new_tokens=3)

# Explicitly specifying a session
with model.inference_session(max_length=100) as sess:
    output_ids = model.generate(input_ids, max_new_tokens=3, session=sess)
```

Earlier, the 1st snippet created a new session, which is not what most people expected (such code was likely to introduce a bug; this is now fixed).
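For example, the newly supported sampling options are just the standard `GenerationMixin` arguments:

```python
# Sampling options now behave exactly as in a local transformers model:
output_ids = model.generate(
    input_ids,
    do_sample=True,
    top_p=0.9,
    top_k=50,
    repetition_penalty=1.2,
    max_new_tokens=20,
)
```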
9 months ago
Alexander Borzunov 063e94b4c8
Move SequenceManagerConfig -> ClientConfig, petals.dht_utils -> petals.utils.dht (#463) 9 months ago
Artem Chumachenko 568f21dc3b
Add customizable input tensors (#445) 9 months ago
Alexander Borzunov 056f22515a
Prioritize short inference, unmerge pools for long inference (#458)
Right now, long inference requests may occupy the Runtime for a few seconds without yielding it to short requests (the most latency-sensitive ones). This PR fixes that by disallowing the merged pool for long requests and prioritizing the short ones.
9 months ago
justheuristic 55eb36ef48
Fix missing torch.cuda.synchronize for computing throughput (#456) 9 months ago
Alexander Borzunov 8c546d988a
Test Llama, rebalancing, throughput eval, and all CLI scripts (#452)
This PR extends CI to:

1. Test Llama code using [TinyLlama-v0](https://huggingface.co/Maykeye/TinyLLama-v0).
2. Test rebalancing (sets up a situation where the 1st server needs to change its original position).
3. Check that the benchmark scripts run (in case someone breaks their code). Note that the benchmark results are meaningless here (since they're measured on a tiny swarm of CPU servers, with low `--n_steps`).
4. Test `petals.cli.run_dht`.
5. Increase swap space and watch free RAM (a common issue is that actions are cancelled without explanation if there's not enough RAM - so it's a useful reminder + debug tool).
6. Fix flapping tests for bloom-560m by increasing tolerance.

Other minor changes: fix `--help` messages to show defaults, fix docs, tune rebalancing constants.
9 months ago
Alexander Borzunov 00d48dcbe1
Override float32 in config to bfloat16 (#431) 9 months ago
justheuristic ac9b546706
[Refactor] extract block forward, backward and inference into a separate file (#435)
This PR does not change any functionality. It merely moves stuff around.
List of changes:

- `handler.py/_rpc_forward` became `block_methods/rpc_forward`
- `handler.py/_rpc_backward` became `block_methods/rpc_backward`
- the math bits of `rpc_inference` were extracted into `block_methods/iterate_rpc_inference`

---------

Co-authored-by: Your Name <you@example.com>
Co-authored-by: artek0chumak <artek.chumak@gmail.com>
Co-authored-by: Aleksandr Borzunov <borzunov.alexander@gmail.com>
9 months ago
Vadim Peretokin d0b5af34cd
Fix typo and make blocks message more informative (#437)
The message really doesn't tell me much as a user, since I never touched update_period to begin with:

```
Aug 06 09:43:07.287 [WARN] [petals.server.server.run:701] Declaring blocs to DHT takes more than --update_period, consider increasing it
```

Made it better and more informative.
9 months ago
Alexander Borzunov 351e96bc46
Penalize servers that use relays during rebalancing (#428)
Servers accessible only via relays may introduce issues if they are the only servers holding certain blocks. Specifically, a connection to such servers may be unstable or may open only after a delay.

This PR changes their self-reported throughput, so that the rebalancing algorithm prefers directly reachable servers for hosting each block.
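The general idea is a multiplicative penalty on the throughput that relay-only servers report (the constant and names below are illustrative, not the actual values):

```python
# Sketch: penalize the self-reported throughput of servers reachable only via relays,
# so block layout prefers directly reachable servers.
RELAY_PENALTY = 0.2  # assumption: an illustrative constant, not the real one

def reported_throughput(measured: float, reachable_directly: bool) -> float:
    return measured if reachable_directly else measured * RELAY_PENALTY
```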
9 months ago
Alexander Borzunov fd19c21859
Update --update_period and --expiration defaults (#410) 10 months ago
justheuristic 5af04524dd
Split long sequences into chunks (#403)
This PR is designed to avoid the OOMs that happen when processing long sequences due to the huge attention logits matrices.
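A minimal sketch of the idea (the chunk size and helper signature are illustrative):

```python
# Run a transformer block over a long prompt piece by piece, so the attention logits
# matrix (~ seq_len x seq_len per head) never has to materialize for the full sequence.
import torch

def forward_in_chunks(block, hidden_states: torch.Tensor, max_chunk: int = 1024) -> torch.Tensor:
    outputs = []
    for start in range(0, hidden_states.shape[1], max_chunk):
        chunk = hidden_states[:, start : start + max_chunk]
        outputs.append(block(chunk))  # the real code also threads the attention cache through
    return torch.cat(outputs, dim=1)
```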

Co-authored-by: Alexander Borzunov <borzunov.alexander@gmail.com>
10 months ago
Alexander Borzunov 30b94ef18b
If speedtest fails, assume network speed of 100 Mbit/s (#404)
The value is chosen as a safe value below the average reported at https://health.petals.dev/

Note that if a server uses relays, the effective throughput will be further divided by 2 (see #399).
10 months ago
Alexander Borzunov 8666653cf5
Fix routing through relay, default network RPS, --token, logging, readme (#399)
* Hide GeneratorExit in _iterate_inference_steps()
* Update README.md about `--public_name`
* Use .from_pretrained(..., use_auth_token=token) instead of token=token until it's fully supported across HF libs
* Use default network speed 25 Mbit/s
* Apply relay penalty in max-throughput routing
* Replace RPS with "tokens/sec per block" in logs
* Increase default expiration
10 months ago
Alexander Borzunov 6e4ebb94d2
Fix deadlocks in MemoryCache (#396)
- Fix deadlocks in MemoryCache
- Set default --alloc_timeout to 1 until the MemoryCache update
10 months ago
Alexander Borzunov b6b3ae964f
Fix --attn_cache_tokens default (#392) 10 months ago
justheuristic e51e84631d
Update to petals.dev (#390)
Since the `petals.ml` DNS record is still unavailable, we're switching everything to https://petals.dev

Co-authored-by: Aleksandr Borzunov <hxrussia@gmail.com>
10 months ago
Alexander Borzunov 057a2fb5de
Support Llama 2 (#379) 10 months ago
Alexander Borzunov 3218534745
Fix --token arg (#378) 10 months ago
justheuristic 5a8de2f1f8
Fix handler memory leak, get rid of mp.Manager (#373)
This PR removes a memory leak somewhere within handler.py, apparently related to mp.SyncManager.
10 months ago
Alexander Borzunov c735dd7ba3
Update transformers to 4.31.0 and peft to 0.4.0 (#371) 10 months ago
Alexander Borzunov a6fdfc0556
Fix AssertionError on rebalancing (#370) 10 months ago
Alexander Borzunov 62d9ed5ce7
Implement shortest-path routing for inference (#362)
This PR:

1. **Adds shortest-path routing for inference.** We build a graph with client-server and server-server latencies and compute costs, as well as empirically measured overheads (a toy sketch of the idea follows this list). For client-server latencies, we ping possible first and last servers in a sequence in `SequenceManager.update()`. We penalize servers that may not have enough cache for our request. This uses info added to the DHT in #355, #356, #358.

2. **Makes a server ping neighboring servers in addition to next ones.** This gives the client an opportunity to switch servers even before using all of a server's blocks (e.g., because a neighboring server is faster). This feature is not enabled yet, since it increases the graph size for N servers to O(N^2), but we may enable it if needed.

3. **Fixes a `SequenceManager` bug with the first `update()`.** Previously, this update was likely to produce incorrect information and cause `MissingBlocksError`s until the next update happened.
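A toy sketch of the routing idea (graph construction and edge costs are greatly simplified):

```python
import heapq

def shortest_route(edges, source, target):
    """Dijkstra over a graph whose edge weights combine ping RTTs, per-block compute
    cost, and penalties for servers that are low on attention cache."""
    queue = [(0.0, source, [source])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node in visited:
            continue
        visited.add(node)
        if node == target:
            return cost, path
        for neighbor, weight in edges.get(node, ()):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    raise RuntimeError(f"no route from {source} to {target}")

# Toy graph: "client" -> servers hosting consecutive block ranges -> "exit".
edges = {
    "client": [("serverA[0:40]", 0.05), ("serverC[0:40]", 0.20)],
    "serverA[0:40]": [("serverB[40:80]", 0.03)],
    "serverC[0:40]": [("serverB[40:80]", 0.02)],
    "serverB[40:80]": [("exit", 0.01)],
}
print(shortest_route(edges, "client", "exit"))  # picks the route via serverA (total cost ~0.09)
```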
10 months ago
Alexander Borzunov 11f0d992d7
Report inference, forward, and network RPS separately (#358)
Inference RPS may be very different from forward RPS. E.g., currently bnb uses a completely different algorithm for NF4 inference. We report detailed RPS info that can then be used for shortest-path routing for inference.
10 months ago
Alexander Borzunov 81c4a45ca2
Make a server ping next servers (#356)
This PR makes a server ping potential next servers in a chain and report the RTTs to DHT. This will be used for shortest-path routing.
10 months ago
Alexander Borzunov 2c8959e713
Share more info about a server in DHT (#355) 10 months ago
justheuristic 37fdcb3fe0
Switch adapters slightly faster (#353)
Currently, each `TransformerBackend.inference_step` looks for adapters and sets the correct adapter type for each block. This is not very expensive, but it can measurably affect inference time.

This pull request uses faster adapter switching with just one variable assignment, without iterating over block.modules().
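Roughly, the difference looks like this (an illustrative sketch, not the actual `TransformerBackend` code):

```python
# Before: walk every submodule on every inference step to activate the right adapter.
def set_adapter_by_iteration(block, adapter_name):
    for module in block.modules():
        if hasattr(module, "active_adapter"):
            module.active_adapter = adapter_name

# After: all adapter layers of a block read one shared holder, so switching adapters
# is a single variable assignment per step.
class ActiveAdapter:
    name = None

def set_adapter_fast(shared_holder: ActiveAdapter, adapter_name):
    shared_holder.name = adapter_name  # each LoRA layer checks shared_holder.name in forward()
```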
10 months ago
Alexander Borzunov 9703358df0
Fix bugs in _choose_num_blocks() added in #346 (#354) 10 months ago
Alexander Borzunov 1a78638c02
Test that bitsandbytes is not imported when it's not used (#351)
We avoid importing bitsandbytes when it's not used, since bitsandbytes doesn't always find the correct CUDA libs and may raise exceptions because of that.
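A minimal sketch of the pattern and the corresponding check (illustrative, not the actual petals test):

```python
# Defer the import to the code path that actually needs quantization:
def quantize_4bit(weight, quant_type="nf4"):
    import bitsandbytes as bnb  # only loaded here, so `import petals` stays bnb-free
    return bnb.functional.quantize_4bit(weight, quant_type=quant_type)

# The test can then assert that a plain import does not pull in bitsandbytes:
import subprocess, sys

subprocess.check_call([
    sys.executable, "-c",
    "import petals, sys; assert 'bitsandbytes' not in sys.modules",
])
```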
10 months ago
Alexander Borzunov e12d4c666b
Spam less in server logs (#350) 10 months ago
justheuristic 010857a834
Estimate adapter memory overhead in choose_num_blocks() (#346)
* estimate adapter memory overhead
* reduce number of heads based on that

---------

Co-authored-by: Alexander Borzunov <borzunov.alexander@gmail.com>
10 months ago
Alexander Borzunov 43acfe52a7
Import petals.utils.peft only when needed to avoid unnecessary import of bitsandbytes (#345)
The motivation is the same as in #180.
10 months ago
Artem Chumachenko b9f0a5467f
Support peft LoRA adapters (#335)
Implement an option to deploy PEFT adapters to a server. Clients can set `active_adapter=...` to use these adapters.
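On the client side, usage looks roughly like this (the model and adapter names are examples; participating servers are assumed to have deployed this adapter):

```python
# Sketch: select a server-side LoRA adapter from the client.
from petals import AutoDistributedModelForCausalLM

model = AutoDistributedModelForCausalLM.from_pretrained(
    "petals-team/StableBeluga2",
    active_adapter="my-org/my-lora-adapter",  # a PEFT LoRA repo hosted by participating servers
)
```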

---------

Co-authored-by: Aleksandr Borzunov <borzunov.alexander@gmail.com>
Co-authored-by: justheuristic <justheuristic@gmail.com>
10 months ago