Commit Graph

72 Commits (main)

Artem Chumachenko d6f4f80f3f
Fix Mixtral-related issues (#570)
This PR fixes problems related to #569:
- block initialization
- throughput calculation and cache usage
- Mixtral in tests

Beam search is removed for Mixtral and Llama for now. Those models use `DynamicCache`, which requires a special function to modify (see https://github.com/huggingface/transformers/blob/main/src/transformers/cache_utils.py#L161)

---------

Co-authored-by: Max Ryabinin <mryabinin0@gmail.com>
1 month ago
Denis Mazur 0d91bbdac3
Bump transformers and accelerate versions (#554)
Bump versions for transformers and accelerate, remove falcon-rw-1b CI tests
3 months ago
Alexander Borzunov 82a97d6e9e
Fix beam search in GPU clients (#531)
Fixes #503.
7 months ago
Alexander Borzunov 47d50e1e29
Improve default arguments for clients and servers (#530)
This PR updates multiple default arguments in clients and servers:

1. **The client defaults to `torch_dtype=torch.float32` instead of `torch_dtype="auto"`.**

    The old default was to load weights in the dtype they are saved in (usually bfloat16/float16), which caused issues when the client was run on CPU (the default unless you call `.cuda()`). Specifically, bfloat16 is slow on most CPUs (unless a CPU supports AVX512), and float16 can't be run natively and leads to an exception. This default was a legacy of the earliest Petals versions designed to run BLOOM - its embeddings were so big that they didn't fit into RAM in float32 (e.g., in Colab). The newer models don't have this issue.

    In contrast, the new default leads to good speed on all CPUs and is consistent with PyTorch and HF Transformers. Also, the client now shows the "bfloat16 on a non-AVX512 CPU" warning in all cases (previously, this warning was shown only if the machine had enough RAM to fit float32 weights, which could hide the crucial reason for slow inference).

    **Note:** This change is backward-incompatible, so we have to increase at least the minor package version (2.2.0 -> 2.3.0.dev0).

2. **The server uses 2x smaller `--attn_cache_tokens`.**

    The old default led to loading 39 (out of 80) or 78 (out of 80) blocks for popular models on some GPU types, which visibly slowed down inference due to an excess network hop. It also reserved more cache than was actually used, so inference slowed down long before the cache was full.

    The new default leads to more efficient block layouts and makes the inference routing algorithm choose alternative paths through other servers when a particular server already has enough active inference sessions (= its cache is full).

3. **The client's max number of retries can be limited by the `PETALS_MAX_RETRIES` env var.**

    This makes it possible to limit `ClientConfig.max_retries` in tests, so we see tracebacks instead of retrying indefinitely in case of errors (see the sketch below).
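
A minimal client-side sketch of how these defaults combine (the model name is an example, and the exact point where `PETALS_MAX_RETRIES` is read may differ):

```python
import os

import torch
from petals import AutoDistributedModelForCausalLM

# Cap retries so tests fail with a traceback instead of retrying indefinitely
os.environ["PETALS_MAX_RETRIES"] = "3"

# New default: weights are loaded in float32, which runs well on any CPU
model = AutoDistributedModelForCausalLM.from_pretrained("bigscience/bloom")

# Opting back into half precision explicitly, e.g. for a GPU client
model = AutoDistributedModelForCausalLM.from_pretrained(
    "bigscience/bloom", torch_dtype=torch.bfloat16
).cuda()
```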
7 months ago
Alexander Borzunov 5ce4f1a159
Store (start_block, end_block) in each DHT record for reliability (#510)
This PR fixes gaps in the DHT server info caused by unavailable DHT keys. Now, one DHT key is enough to get info about all blocks hosted by a server, so the info remains visible unless all of the server's keys become unavailable.

Also, this PR refactors `petals.client.routing` and `petals.server.block_selection` modules to use the common `compute_spans()` function (defined in `petals.utils.dht`) and `RemoteSpanInfo` class (defined in `petals.data_structures`).
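
A hypothetical sketch of what a `compute_spans()`-style helper makes possible once each record carries `(start_block, end_block)` (field names are illustrative, not the exact `petals.utils.dht` API):

```python
from dataclasses import dataclass

@dataclass
class Span:  # simplified stand-in for petals.data_structures.RemoteSpanInfo
    peer_id: str
    start: int  # first hosted block (inclusive)
    end: int    # last hosted block (exclusive)

def block_to_servers(spans: list[Span], n_blocks: int) -> list[list[str]]:
    """Expand per-server (start, end) records into a per-block server list."""
    hosts = [[] for _ in range(n_blocks)]
    for span in spans:
        for block in range(span.start, span.end):
            hosts[block].append(span.peer_id)
    return hosts

# A single surviving DHT record per server recovers its whole span:
layout = block_to_servers([Span("peerA", 0, 40), Span("peerB", 30, 80)], n_blocks=80)
assert layout[35] == ["peerA", "peerB"]
```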
8 months ago
Alexander Borzunov d40eb6c701
Fix prompt tuning after #464 (#501)
Unfortunately, running inference in models with `"ptune" in config.tuning_mode` was broken after #464.
8 months ago
Alexander Borzunov 6ef6bf5fa2
Create model index in DHT (#491)
This PR creates an index of models hosted in the swarm - it is useful for knowing which custom models users run and for displaying them at https://health.petals.dev as "not officially supported" models.
9 months ago
Alexander Borzunov a26559ff65
Fix `.generate(input_ids=...)` (#485) 9 months ago
Alexander Borzunov de2475f31c
Make client compatible with transformers' GenerationMixin (#464)
This PR drops the custom generation code and introduces compatibility with `transformers.GenerationMixin` instead. This includes support for more sampling options (`top_p`, `top_k`, `repetition_penalty`, requested in #460) and beam search - all of this is now identical to running the model with transformers locally.

Most features (excluding beam search and other rarely used stuff) are also compatible with resuming existing sessions.

### Breaking changes

If `.generate()` or forward passes are being run inside an `.inference_session()` context, they now use the opened session by default. So, these snippets are now equivalent:

```python
# Using default session
with model.inference_session(max_length=100):
    output_ids = model.generate(input_ids, max_new_tokens=3)

# Explicitly specifying a session
with model.inference_session(max_length=100) as sess:
    output_ids = model.generate(input_ids, max_new_tokens=3, session=sess)
```

Previously, the first snippet created a new session, which is not what most people expected (such code was likely to introduce a bug, which is now fixed).
9 months ago
Alexander Borzunov 063e94b4c8
Move SequenceManagerConfig -> ClientConfig, petals.dht_utils -> petals.utils.dht (#463) 9 months ago
Artem Chumachenko 568f21dc3b
Add customizable input tensors (#445) 9 months ago
Alexander Borzunov 329f7d31e8
Add `blocked_servers` argument (#462)
Should be used as:

```python
model = AutoDistributedModelForCausalLM(model_name, blocked_servers=[peer_id1, peer_id2])
```
9 months ago
Alexander Borzunov 2a150770a4
Prefer longer servers for fine-tuning, exclude unreachable (#448)
We choose longer servers to minimize the number of hops but leave some randomization to distribute the load. We also exclude servers known to be unreachable.
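
A sketch of the selection rule (names are illustrative, not the actual `block_selection` code):

```python
import random
from dataclasses import dataclass

@dataclass
class ServerInfo:  # illustrative stand-in for a DHT-reported server record
    peer_id: str
    start: int       # first hosted block (inclusive)
    end: int         # last hosted block (exclusive)
    reachable: bool

def pick_server(servers: list[ServerInfo]) -> ServerInfo:
    candidates = [s for s in servers if s.reachable]  # exclude unreachable ones
    lengths = [s.end - s.start for s in candidates]
    # Longer spans mean fewer hops per forward/backward pass, while sampling
    # (instead of always taking the longest) spreads the load across servers.
    return random.choices(candidates, weights=lengths, k=1)[0]
```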
9 months ago
Alexander Borzunov 351e96bc46
Penalize servers that use relays during rebalancing (#428)
Servers accessible only via relays may introduce issues if they are the only type of servers holding certain blocks. Specifically, a connection to such servers may be unstable or opened after a certain delay.

This PR changes their self-reported throughput so that the rebalancing algorithm prefers directly reachable servers for hosting each block (see the sketch below).
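
The gist of the change, as a sketch (the penalty factor here is an assumption, not the constant used in the PR):

```python
RELAY_PENALTY = 0.2  # hypothetical multiplier; the real value may differ

def self_reported_throughput(measured_rps: float, behind_relay: bool) -> float:
    # Relay-only servers advertise a penalized throughput, so rebalancing
    # prefers directly reachable servers when assigning each block.
    return measured_rps * RELAY_PENALTY if behind_relay else measured_rps
```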
9 months ago
Alexander Borzunov 44fefa5e54
Add connect_timeout (#423) 10 months ago
Alexander Borzunov 8666653cf5
Fix routing through relay, default network RPS, --token, logging, readme (#399)
* Hide GeneratorExit in _iterate_inference_steps()
* Update README.md about `--public_name`
* Use .from_pretrained(..., use_auth_token=token) instead of token=token until it's fully supported across HF libs
* Use default network speed 25 Mbit/s
* Apply relay penalty in max-throughput routing
* Replace RPS with "tokens/sec per block" in logs
* Increase default expiration
10 months ago
justheuristic e51e84631d
Update to petals.dev (#390)
Since the `petals.ml` DNS record is still unavailable, we're switching everything to https://petals.dev

Co-authored-by: Aleksandr Borzunov <hxrussia@gmail.com>
10 months ago
justheuristic 398a384075
Inherit bitsandbytes compute dtype correctly (override peft quirk) (#377) 10 months ago
justheuristic 1ab35c2826
Typo in inference_session.py 10 months ago
Alexander Borzunov 62d9ed5ce7
Implement shortest-path routing for inference (#362)
This PR:

1. **Adds shortest path routing for inference.** We build a graph with client-server and server-server latencies and compute costs, as well as empirically measured overheads. For client-server latencies, we ping possible first and last servers in a sequence in `SequenceManager.update()`. We penalize servers who may not have enough cache for our request. This uses info added to DHT in #355, #356, #358.

2. **Makes a server ping neighboring servers in addition to next ones.** This is to get an opportunity to change the server even before we use all its blocks (e.g., because a neighboring server is faster). This feature is not enabled though, since it increases graph size for N servers to O(N^2) - but we may enable it if needed.

3. **Fixes a `SequenceManager` bug with the first `update()`.** Previously, this update was likely to produce incorrect information and cause `MissingBlocksError`s until the next update happened. A simplified sketch of the routing idea from item 1 follows.
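
A self-contained sketch of the idea behind item 1: treat routing as a shortest path over `(next block, current server)` states, with edge weights that combine hop latency and per-block compute time. Everything here (names, costs, the greedy run-to-end-of-span rule) is illustrative, not the actual Petals implementation:

```python
import heapq
import itertools

def shortest_route(n_blocks, spans, client_ping, compute_cost, hop_latency):
    """spans: {server: (start, end)}, client_ping: {server: rtt in seconds},
    compute_cost: {server: seconds per block}, hop_latency: per-switch cost."""
    tie = itertools.count()  # tiebreaker so the heap never compares states
    start = (0, None)        # (next block to compute, current server; None = client)
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, next(tie), start)]
    while heap:
        d, _, (block, server) = heapq.heappop(heap)
        if d > dist[(block, server)]:
            continue  # stale heap entry
        if block == n_blocks:  # all blocks computed: unwind the server chain
            path, state = [], (block, server)
            while state in prev:
                path.append(state[1])
                state = prev[state]
            return list(reversed(path))
        for nxt, (lo, hi) in spans.items():
            if not lo <= block < hi:
                continue  # this server can't run the next block
            hop = client_ping[nxt] if server is None else hop_latency
            cost = d + hop + (hi - block) * compute_cost[nxt]
            state = (hi, nxt)  # greedily run all of this server's remaining blocks
            if cost < dist.get(state, float("inf")):
                dist[state], prev[state] = cost, (block, server)
                heapq.heappush(heap, (cost, next(tie), state))
    raise RuntimeError("no chain of servers covers all blocks")

# Two fast nearby servers beat a single slow distant one (0.95 s vs. 1.9 s):
route = shortest_route(
    n_blocks=80,
    spans={"A": (0, 50), "B": (40, 80), "C": (0, 80)},
    client_ping={"A": 0.05, "B": 0.05, "C": 0.30},
    compute_cost={"A": 0.01, "B": 0.01, "C": 0.02},
    hop_latency=0.10,
)
assert route == ["A", "B"]
```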
10 months ago
Alexander Borzunov 11f0d992d7
Report inference, forward, and network RPS separately (#358)
Inference RPS may be very different from forward RPS. E.g., currently bnb uses a completely different algorithm for NF4 inference. We report detailed RPS info that can then be used for shortest-path routing for inference.
10 months ago
Artem Chumachenko b9f0a5467f
Support peft LoRA adapters (#335)
Implement an option to deploy PEFT adapters to a server. Clients can set `active_adapter=...` to use these adapters.
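
A client-side usage sketch (assuming `active_adapter` is accepted at load time as described; the adapter repo name is a placeholder):

```python
from petals import AutoDistributedModelForCausalLM

# Servers that have this PEFT adapter loaded will apply it to their blocks
model = AutoDistributedModelForCausalLM.from_pretrained(
    "bigscience/bloom", active_adapter="username/my-lora-adapter"  # placeholder
)
```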

---------

Co-authored-by: Aleksandr Borzunov <borzunov.alexander@gmail.com>
Co-authored-by: justheuristic <justheuristic@gmail.com>
10 months ago
Alexander Borzunov 158013a671
Implement direct server-to-server communication (#331)
Implement #226.
10 months ago
Alexander Borzunov 47a2b1ee65
Fix llama's lm_head.weight.requires_grad (#330)
By default, LLaMA's `lm_head.weight.requires_grad` was `True`, but we expect it to be `False`.
11 months ago
Alexander Borzunov cb3f018f9f
Add LLaMA support (#323)
This PR:

1. **Abolishes the model conversion procedure.** Now, models are downloaded directly from original repositories like https://huggingface.co/bigscience/bloom. Servers download only shards with blocks to be hosted, and clients download only shards with input/output embeddings and layernorms.

    - BLOOM is loaded from `bigscience/bloom`, but we use the DHT prefix `bigscience/bloom-petals` for backward compatibility. Same with smaller BLOOMs and BLOOMZ.
    - LLaMA can be loaded from any repo like `username/llama-65b-hf`, but we use the DHT prefix `llama-65b-hf` (without the username) to accommodate blocks from different repos (there are a few of them with minor differences, such as `Llama` vs. `LLaMA` in the class name).

2. **Refactors the client to generalize it for multiple models.** Now, we have `petals.models` packages that contain model-specific code (e.g. `petals.models.bloom`, `petals.models.llama`). General code (e.g. CPU-efficient LM head, p-tuning) is kept in `petals.client`.

3. **Introduces** `WrappedLlamaBlock`, `DistributedLlamaConfig`, `DistributedLlamaForCausalLM`, `DistributedLlamaForSequenceClassification`, and `DistributedLlamaModel` compatible with Petals functionality (p-tuning, adapters, etc.).

4. **Introduces** `AutoDistributedConfig` that automatically chooses the correct config class (`DistributedLlamaConfig` or `DistributedBloomConfig`). The refactored configs contain all model-specific info for both clients and servers (see the usage sketch below).

Upgrade instructions:

- Remove disk caches for blocks in old (converted) format to save disk space. That is, remove `~/.cache/petals/model--bigscience--bloom-petals` and `~/.cache/petals/model--bigscience--bloomz-petals` directories (if present).
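
A usage sketch for `AutoDistributedConfig` from item 4 (the repo name is a placeholder; exact import paths may differ):

```python
from petals import AutoDistributedConfig

# Picks DistributedLlamaConfig or DistributedBloomConfig based on the repo's config
config = AutoDistributedConfig.from_pretrained("username/llama-65b-hf")  # placeholder
print(type(config).__name__)  # e.g. "DistributedLlamaConfig"
```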
11 months ago
Max Ryabinin 3e7ae5116d
Remove unused imports and attributes (#324)
11 months ago
Alexander Borzunov 6eb306a605
Raise error for unexpected .generate() kwargs (#315)
Previously, if a user passed unexpected kwargs to `.generate()`, they were __ignored__, and the code continued working as if the argument were correctly supported. For example, people often tried passing `repetition_penalty` and didn't notice that it had no effect. This PR fixes this problem.
1 year ago
Alexander Borzunov 6137b1b4b0
Replace .make_sequence(..., mode="random") with mode="max_throughput" (#313)
We need to sample the next server using its throughput as the weight to actually achieve max throughput for fine-tuning.

As an example, imagine a situation where we have 3 servers with throughputs [1000, 500, 1] hosting the same blocks, then compare the uniform and weighted sampling strategies.
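
Illustration of the comparison with the numbers above (plain Python, not Petals code):

```python
import random

throughputs = {"fast": 1000, "medium": 500, "slow": 1}
servers, weights = zip(*throughputs.items())

# mode="random": each server is picked with probability 1/3, so a third of all
# steps land on the ~1 req/s server, which bottlenecks fine-tuning throughput.
uniform_pick = random.choice(servers)

# mode="max_throughput": the slow server is picked with probability 1/1501,
# and the load on each server stays proportional to its capacity.
weighted_pick = random.choices(servers, weights=weights, k=1)[0]
```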
1 year ago
Alexander Borzunov 8f6342a861
Refactor RemoteSequenceManager (#309)
This PR:

1. **Extracts `SequenceManagerConfig` and `SequenceManagerState` subclasses.**

    The config is provided by the caller and never changed from inside `RemoteSequenceManager`. The state is the part of `RemoteSequenceManager`'s state that is shared between the main manager and its slices. We fix some slicing bugs along the way (see the sketch after this list).

2. **Removes `dht_prefix` and `p2p` arguments, makes `dht` argument optional.**

    `dht_prefix` can always be overridden using `config.dht_prefix`. `p2p` is actually needed only under the hood of `RemoteSequenceManager`, so it can extract it by itself without exposing this low-level class to callers. If strictly necessary, a caller can provide `p2p` as a part of `SequenceManagerState`. `dht` is also needed only by `RemoteSequenceManager`, so we can make it optional in the parent classes and create it automatically when it's not provided.

3. **Simplifies retry logic.**

    Previously, we could have "nested" retry loops: one in `._update()`, another in inference/forward/backward steps. The loop in `._update()` could introduce issues for concurrent inference/forward/backward calls, since it blocks the entire class if its delay period becomes too high. Now this logic is simplified: `._update()` performs only one attempt to fetch the DHT info, and any retries are triggered by the inference/forward/backward steps.

4. **Removes deprecated `RemoteTransformerBlock`.**

    `RemoteTransformerBlock` was deprecated a long time ago, before Petals 1.0.0. Its removal is long overdue.

5. **Removes `dht_utils.get_remote_module()`, `dht_utils.get_remote_sequence()`.**

    These functions duplicate the functionality of the `RemoteSequential` constructor.

6. (minor) **Removes `RemoteSequential.is_subsequence` flag.**

    This flag worked incorrectly and was never used. I am removing it for the sake of simplicity.
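
A sketch of the config/state split from items 1-2 (field names are illustrative, not the exact Petals attributes):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SequenceManagerConfig:
    # Provided by the caller and treated as read-only inside the manager.
    dht_prefix: Optional[str] = None
    request_timeout: float = 30.0
    max_retries: Optional[int] = None

@dataclass
class SequenceManagerState:
    # Mutable state shared between the main manager and its slices, so a
    # slice sees the same routing info and bans as the manager it came from.
    p2p: Optional[object] = None  # created lazily, hidden from callers
    banned_peers: set = field(default_factory=set)
```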
1 year ago
Alexander Borzunov c0e0e1319d
Force transformers to use config.torch_dtype by default (#307) 1 year ago
Alexander Borzunov 35662b4a16
Require bitsandbytes == 0.38.0.post2, hivemind == 1.1.7 (#302)
In particular, this PR fixes 8-bit support on nvidia16 GPUs (such as 1660) by including https://github.com/TimDettmers/bitsandbytes/pull/292. This support was requested multiple times on Discord.
1 year ago
Alexander Borzunov 21c3526ec1
Start SequenceManager's thread only after first .make_sequence() (#301)
**Why?**

- We'd like to avoid excess threads for the original sequence manager in case we only use its slices (e.g., when we add adapters or need only a subset of model blocks).

- If we create a sequence manager just before a fork (e.g., in a web app backend or a multi-thread benchmark), we'd like to avoid excess threads in the original process and only use this thread in child processes where we actually call `.make_sequence()`. The general pattern is sketched below.
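
A fork-safe sketch of the pattern (class and method names are illustrative):

```python
import threading

class LazySequenceManager:
    def __init__(self, config):
        self.config = config
        self._thread = None  # not started yet: cheap to create, safe to fork

    def make_sequence(self, start_block: int, end_block: int):
        if self._thread is None:  # first real use: start the updater thread
            self._thread = threading.Thread(target=self._update_loop, daemon=True)
            self._thread.start()
        ...  # build the sequence using the freshest routing info

    def _update_loop(self):
        ...  # periodically refresh block/server info from the DHT
```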
1 year ago
Alexander Borzunov 6c6150f684
Remove use_auto_relay=True in client (#300)
`use_auto_relay=True` makes the libp2p daemon look for relays to become reachable if we are behind NAT/firewall. However, being reachable is not necessary for the Petals client, and we should not spend the relays' capacity on this.
1 year ago
Alexander Borzunov e0cef73757
Hotfix: Increase daemon_startup_timeout (#292)
For some reason, 15 sec is currently not enough to connect to the bootstrap peers in the public swarm, as reported by multiple users and observed by me. Increasing it to 120 sec until we find the root cause of the issue.
1 year ago
Alexander Borzunov aae1f4f368
Increase default request_timeout (#276)
This PR increases `request_timeout`, since the previous default of 30 sec is not enough for many use cases.

Previously, we kept the request timeout low since we assumed that the server could freeze on dial if the target peer is behind a firewall. However, apparently, it won't freeze because libp2p has its own [dial timeout](https://github.com/libp2p/go-libp2p/blob/v0.26.0/core/network/context.go#L11).
1 year ago
Alexander Borzunov a2e7f27a5a
Improve "connect your GPU" message (#266) 1 year ago
Alexander Borzunov fee19e9b9b
Use get_logger(__name__) instead of get_logger(__file__) (#265) 1 year ago
Alexander Borzunov 55e7dc07a0
Limit max delay between retries to 15 min (#264) 1 year ago
Alexander Borzunov 4091db10bf
Lower payload size threshold for stream handlers (#251)
Hotfix: we add `// 2` since `hivemind==1.1.5` serializes bfloat16 tensors in float32, so they take 2x more space.
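
The same arithmetic, spelled out (the base threshold value is hypothetical):

```python
BASE_THRESHOLD = 2 * 1024 * 1024  # hypothetical pre-fix threshold, in bytes

# hivemind==1.1.5 serializes bfloat16 as float32 (4 bytes/elem instead of 2),
# so tensors cross the wire at 2x their logical size; halving the threshold
# keeps the *serialized* payload under the original limit.
STREAM_THRESHOLD = BASE_THRESHOLD // 2
```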
1 year ago
Alexander Borzunov 9954cb84fe
Add `allowed_servers`, `max_retries` options to the client, improve logs (#235) 1 year ago
Artem Chumachenko d4c687daca
Fix dtype error in fine-tuning notebooks (#231) 1 year ago
Shuchang Zhou 3189b395f0
Fix a typo in error message (#227)
From the code context, it can be inferred that `do_sample == False` when control reaches this point.
1 year ago
Alexander Borzunov 5ff250bee9
Improve errors in case of missing blocks, suggest to join your own server (#212) 1 year ago
Alexander Borzunov 6ba63c6cc8
Fix output shape when resuming generation (#211)
Before this PR, `model.generate()` returned one excess token when resuming generation in an existing session (the last token of the previous session, `session.last_token_id`). This is unexpected behavior that is inconvenient for downstream apps, so this PR changes it before it's too late.
1 year ago
justheuristic 012f840f7e
Use length-weighted sampling in routing for inference (#204)
This pull request implements a simple (1) greedy (2) latency-agnostic routing optimization that should speed up both of our use cases.

Why this exists: our effort to merge full routing (ping-aware, throughput-aware, Dijkstra) is in a sorry state between several branches; merging it into main would take many days.

Co-authored-by: Aleksandr Borzunov <borzunov.alexander@gmail.com>
1 year ago
Alexander Borzunov b4f3224cda
Make client ignore blacklist if all servers holding a block are blacklisted (#197)
If all servers holding a certain block are blacklisted, we should display errors from them instead of raising `No peers holding blocks`.

Indeed, if the error is client-caused, the client should learn its reason from the latest error messages. In turn, if the error is server/network-caused and we only have a few servers, we'd better know the error instead of banning all the servers and making the user think that no servers are available.
1 year ago
Egiazarian Vage 93bed7da5a
Support libp2p relays for NAT traversal (#186)
- Added relay options to servers
- Enabled relay options by default
- Changed hivemind version to 1.1.5
- Moved reachability check to be performed after blocks are loaded

Co-authored-by: Alexander Borzunov <borzunov.alexander@gmail.com>
1 year ago
Alexander Borzunov e27706358c
Use slightly less memory in .generate() (#177) 1 year ago
Alexander Borzunov 55698381d0
Disable chunked_forward() on AVX512 CPUs (#179) 1 year ago
Alexander Borzunov 6948a0c5ee
Allow to disable chunked forward (#176) 1 year ago