Commit Graph

474 Commits

Author SHA1 Message Date
Alexander Borzunov
18e93afc73
Don't install cpufeature on non-x86_64 machines (#478)
Necessary since cpufeature crashes when installing on ARM.
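For reference, a conditional dependency like this is typically expressed with a PEP 508 environment marker; a sketch (not the literal diff, and the version bound is illustrative):

```python
from setuptools import setup

# A sketch: the environment marker makes pip skip cpufeature entirely on
# machines that are not x86_64 (e.g. ARM), where installation crashes.
setup(
    name="petals",
    install_requires=[
        'cpufeature >= 2.0; platform_machine == "x86_64"',
    ],
)
```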
2023-08-24 19:57:15 +04:00
Alexander Borzunov
6967904590
Bump version to 2.1.0 (#474)
* Bump version to 2.1.0
* Suggest using resharded repo
* LLaMA -> Llama in readme
2023-08-24 19:42:19 +04:00
Alexander Borzunov
df8ab09ca2
Hide excess key message (#476)
Before:

```
Aug 23 23:51:31.394 [INFO] Loaded Maykeye/TinyLLama-v0 block 0, _IncompatibleKeys(missing_keys=[], unexpected_keys=['self_attn.rotary_emb.inv_freq'])
```

After:

```
Aug 23 23:51:31.394 [INFO] Loaded Maykeye/TinyLLama-v0 block 0
```

Hiding this because the excess keys in Llama-based models are expected as of the latest transformers release.
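For illustration, a minimal sketch of how such a message can be trimmed (not the exact Petals code; `block`, `state_dict`, `logger`, and `block_index` are assumed):

```python
# Surface key mismatches only when keys are actually missing, instead of
# printing the full _IncompatibleKeys report on every block load.
report = block.load_state_dict(state_dict, strict=False)
if report.missing_keys:
    logger.warning(f"Missing keys in block {block_index}: {report.missing_keys}")
logger.info(f"Loaded block {block_index}")
```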
2023-08-24 00:41:40 +04:00
Artem Chumachenko
a14ae7334d
Update peft to version 0.5.0 (#475)
2023-08-23 20:21:28 +04:00
Alexander Borzunov
a9b0e9ff1a
Support loading weights from Safetensors on server (#473) 2023-08-23 01:43:29 +04:00
justheuristic
4f850996bb
Change transformers version assert (#472) 2023-08-22 21:53:14 +04:00
justheuristic
9250025140
Support transformers 4.32.x (#471) 2023-08-22 20:10:29 +03:00
justheuristic
adda5f8c20
Temporarily require peft<0.5.0, transformers<4.32.0 (#470)
Peft 0.5.0 was released recently and broke some compatibilities. This PR temporarily pins Petals to the previous stable version of peft while we work on 0.5.0 support.
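A sketch of the temporary pins (specifiers taken from the PR title), as they would appear in setup.py's `install_requires`:

```python
# Temporary upper bounds until peft 0.5.0 / transformers 4.32.x are supported:
install_requires = [
    "peft < 0.5.0",
    "transformers < 4.32.0",
]
```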
2023-08-22 19:45:37 +03:00
Alexander Borzunov
de2475f31c
Make client compatible with transformers' GenerationMixin (#464)
This PR drops the custom generation code and introduces compatibility with `transformers.GenerationMixin` instead. This includes support for more sampling options (`top_p`, `top_k`, and `repetition_penalty`, requested in #460) and beam search - all of which now behave identically to running the model with transformers locally.

Most features (excluding beam search and other rarely used options) also remain compatible with resuming existing sessions.

### Breaking changes

If `.generate()` or forward passes are being run inside an `.inference_session()` context, they now use the opened session by default. So, these snippets are now equivalent:

```python
# Using default session
with model.inference_session(max_length=100):
    output_ids = model.generate(input_ids, max_new_tokens=3)

# Explicitly specifying a session
with model.inference_session(max_length=100) as sess:
    output_ids = model.generate(input_ids, max_new_tokens=3, session=sess)
```

Earlier, the first snippet created a new session, which is not what most people expected (such code was likely to introduce a bug, which is now fixed).
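For example, the new sampling options can be passed straight to `.generate()` (a usage sketch; `model`, `tokenizer`, and `input_ids` are assumed to be set up as usual):

```python
# These kwargs are forwarded to transformers' GenerationMixin.
output_ids = model.generate(
    input_ids,
    max_new_tokens=50,
    do_sample=True,
    top_p=0.9,
    top_k=40,
    repetition_penalty=1.2,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```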
2023-08-20 19:18:36 +04:00
Alexander Borzunov
063e94b4c8
Move SequenceManagerConfig -> ClientConfig, petals.dht_utils -> petals.utils.dht (#463) 2023-08-14 17:05:20 +04:00
Artem Chumachenko
568f21dc3b
Add customizable input tensors (#445) 2023-08-14 12:23:16 +04:00
Alexander Borzunov
329f7d31e8
Add blocked_servers argument (#462)
Should be used as:

```python
model = AutoDistributedModelForCausalLM.from_pretrained(model_name, blocked_servers=[peer_id1, peer_id2])
```
2023-08-14 10:41:13 +04:00
Alexander Borzunov
722c4dc496
Bump version to 2.0.1.post2 (#459) 2023-08-11 09:34:05 +04:00
Alexander Borzunov
056f22515a
Prioritize short inference, unmerge pools for long inference (#458)
Right now, long inference requests may occupy the Runtime for a few seconds without yielding it to process short (the most latency-sensitive) requests. This PR fixes that by disallowing the merged pool for long requests and prioritizing the short ones.
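As an illustration of the prioritization (not the actual Petals scheduler), short requests can simply be dequeued first:

```python
import heapq
from dataclasses import dataclass, field

# Requests with fewer tokens get a smaller priority value, so the runtime
# picks them up before long-running ones.
@dataclass(order=True)
class Request:
    num_tokens: int
    name: str = field(compare=False)

queue = []
for request in [Request(2048, "long"), Request(1, "short")]:
    heapq.heappush(queue, request)

while queue:
    request = heapq.heappop(queue)
    print(f"processing {request.name} ({request.num_tokens} tokens)")
# -> processes "short" before "long"
```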
2023-08-11 09:24:33 +04:00
justheuristic
55eb36ef48
Fix missing torch.cuda.synchronize for computing throughput (#456) 2023-08-09 22:59:56 +04:00
Alexander Borzunov
0e7189b3ed
benchmarks: Aggregate speed among workers, set default dtype float32 (#454) 2023-08-09 16:50:02 +04:00
Alexander Borzunov
8c546d988a
Test Llama, rebalancing, throughput eval, and all CLI scripts (#452)
This PR extends CI to:

1. Test Llama code using [TinyLlama-v0](https://huggingface.co/Maykeye/TinyLLama-v0).
2. Test rebalancing (sets up a situation where the 1st server needs to change its original position).
3. Check that the benchmark scripts run (in case someone breaks their code). Note that the benchmark results are meaningless here (since they're measured on a tiny swarm of CPU servers, with low `--n_steps`).
4. Test `petals.cli.run_dht`.
5. Increase swap space and watch free RAM (a common issue is that jobs are cancelled without explanation if there's not enough RAM - so it's a useful reminder + debug tool).
6. Fix flaky tests for bloom-560m by increasing tolerance.

Other minor changes: fix `--help` messages to show defaults, fix docs, tune rebalancing constants.
2023-08-08 19:10:27 +04:00
Alexander Borzunov
df6fdd2d0b
Force using --new_swarm instead of empty --initial_peers (#451)
This prohibits passing `--initial_peers` without arguments, since it's likely to be a side-effect from `--initial_peers $INITIAL_PEERS` with the env var not set.

Users should use `--new_swarm` for that, as explained in the private swarm tutorial.
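A sketch of the check (the flag names come from the PR; the argparse wiring around them is ours):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--initial_peers", nargs="*", default=None)
parser.add_argument("--new_swarm", action="store_true")
args = parser.parse_args()

# With nargs="*", a bare `--initial_peers` parses to an empty list -
# exactly the case we want to reject.
if args.initial_peers == []:
    parser.error(
        "--initial_peers is empty (is your $INITIAL_PEERS env var set?); "
        "use --new_swarm to start a new private swarm"
    )
```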
2023-08-08 04:59:55 +04:00
Alexander Borzunov
2a150770a4
Prefer longer servers for fine-tuning, exclude unreachable (#448)
We prefer servers hosting longer spans of blocks to minimize the number of hops, but keep some randomization to distribute the load. We also exclude servers known to be unreachable.
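An illustrative sketch of the heuristic (the `Server` fields here are hypothetical):

```python
import random
from collections import namedtuple

Server = namedtuple("Server", ["peer_id", "span_length", "is_reachable"])

def choose_server(servers):
    # Drop servers known to be unreachable, then weight the rest by the
    # length of the block span they host: longer spans mean fewer hops,
    # while the randomization still spreads the load.
    candidates = [s for s in servers if s.is_reachable]
    weights = [s.span_length for s in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

servers = [Server("A", 40, True), Server("B", 10, True), Server("C", 80, False)]
print(choose_server(servers).peer_id)  # "A" is chosen 4x more often than "B"
```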
2023-08-07 21:43:21 +04:00
Alexander Borzunov
00d48dcbe1
Override float32 in config to bfloat16 (#431) 2023-08-07 19:47:22 +04:00
justheuristic
ac9b546706
[Refactor] extract block forward, backward and inference into a separate file (#435)
This PR does not change any functionality. It merely moves stuff around.
List of changes:

- `handler.py/_rpc_forward` became `block_methods/rpc_forward`
- `handler.py/_rpc_backward` became `block_methods/rpc_backward`
- the math bits of `rpc_inference` were extracted into `block_methods/iterate_rpc_inference`

---------

Co-authored-by: artek0chumak <artek.chumak@gmail.com>
Co-authored-by: Aleksandr Borzunov <borzunov.alexander@gmail.com>
2023-08-07 14:32:51 +03:00
Alexander Borzunov
593d980ad8
Use bitsandbytes 0.41.1 (#442) 2023-08-07 02:33:42 +04:00
Alexander Borzunov
32fbab5192
Remove deprecated comment in fine-tuning notebook (#443) 2023-08-07 02:22:21 +04:00
Alexander Borzunov
b58141ef66
Remove distracting links from readme (#441) 2023-08-06 18:55:22 +04:00
Alexander Borzunov
679397df0c
Update Discord links from channels to forums (#440)
As our Discord community grows, we've found it difficult to look for open and resolved issues in the **#running-a-client** and **#running-a-server** channels, and to navigate through the interleaving conversations happening there. That's why we recreated these channels as Discord forums, where different discussions are separated into different posts.
2023-08-06 17:11:49 +04:00
Vadim Peretokin
d0b5af34cd
Fix typo and make blocks message more informative (#437)
The message really doesn't tell me much as a user, since I never touched `update_period` to begin with:

```
Aug 06 09:43:07.287 [WARN] [petals.server.server.run:701] Declaring blocs to DHT takes more than --update_period, consider increasing it
```

Made it better and more informative.
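For illustration, a more informative version could look like this (a sketch; the exact wording shipped in the PR may differ, and the values here are examples):

```python
import logging

logger = logging.getLogger(__name__)
elapsed, update_period = 47.2, 30.0  # example values

logger.warning(
    f"Declaring blocks to DHT took {elapsed:.1f} sec, longer than --update_period "
    f"({update_period:.0f} sec); consider increasing --update_period"
)
```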
2023-08-06 16:47:21 +04:00
Alexander Borzunov
a1f7791d5e
Fix petals.utils.ping for servers with client-mode DHT (#430)
Fix #429.
2023-08-03 02:17:07 +02:00
Alexander Borzunov
351e96bc46
Penalize servers that use relays during rebalancing (#428)
Servers accessible only via relays may introduce issues if they are the only servers holding certain blocks. Specifically, a connection to such servers may be unstable or open only after a delay.

This PR changes their self-reported throughput, so that the rebalancing algorithm prefers directly reachable servers when assigning hosts for each block.
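An illustrative sketch of the penalty (the factor of 2 matches the note in #404; the function and attribute names are ours):

```python
def effective_throughput(server) -> float:
    # Rebalancing compares servers by this value, so halving it for
    # relay-only servers makes directly reachable servers preferred.
    throughput = server.measured_throughput  # hypothetical attribute
    if server.reachable_only_via_relay:      # hypothetical attribute
        throughput /= 2
    return throughput
```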
2023-08-03 02:00:43 +02:00
Alexander Borzunov
6a1b8a6a90
Add Stable Beluga 2 to readme (#424) 2023-07-31 01:23:56 +02:00
Alexander Borzunov
44fefa5e54
Add connect_timeout (#423) 2023-07-30 21:31:19 +02:00
Alexander Borzunov
cdc0f70653
Add Discord badge and more Discord links to readme (#422) 2023-07-30 16:07:38 +02:00
Guocheng
8072cd9d1b
Fix stale link (#418) 2023-07-25 16:21:15 +02:00
Alexander Borzunov
f3fafd14a4
Bump version to 2.0.1 (#411) 2023-07-23 18:45:19 +04:00
Alexander Borzunov
fd19c21859
Update --update_period and --expiration defaults (#410) 2023-07-23 17:22:04 +04:00
Alexander Borzunov
ffb20b585c
Update commands for hosting Llama 2 in readme (#409) 2023-07-23 13:08:07 +04:00
Alexander Borzunov
48c6b6d963
Update README.md (#407) 2023-07-23 00:41:41 +04:00
Alexander Borzunov
c153cba1fa
Add Llama 2, WSL instructions to readme (#406) 2023-07-23 00:35:19 +04:00
justheuristic
5af04524dd
Split long sequences into chunks (#403)
This PR is designed to avoid OOMs that happen when processing long sequences due to the huge attention logits matrices.

Co-authored-by: Alexander Borzunov <borzunov.alexander@gmail.com>
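A minimal sketch of the idea, not the Petals implementation: run the forward pass over fixed-size slices of the sequence so each attention logits matrix stays small. The real code must also carry attention state across chunks, which this sketch omits.

```python
import torch

def forward_in_chunks(block, hidden_states: torch.Tensor, chunk_size: int = 1024):
    # `block` is a hypothetical transformer block callable on hidden states.
    outputs = []
    for start in range(0, hidden_states.shape[1], chunk_size):
        outputs.append(block(hidden_states[:, start : start + chunk_size]))
    return torch.cat(outputs, dim=1)
```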
2023-07-22 23:10:46 +04:00
Alexander Borzunov
30b94ef18b
If speedtest fails, assume network speed of 100 Mbit/s (#404)
The value is chosen to be safely below the average at https://health.petals.dev/

Note that if a server uses relays, the effective throughput will be further divided by 2 (see #399).
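A sketch of the fallback logic (the surrounding names are ours, and `speedtest-cli` is an assumption about the client used):

```python
import logging

logger = logging.getLogger(__name__)
DEFAULT_NETWORK_SPEED_MBPS = 100  # a safe value below the average at https://health.petals.dev/

def measure_network_speed() -> float:
    try:
        import speedtest  # the speedtest-cli package

        s = speedtest.Speedtest()
        return min(s.download(), s.upload()) / 1e6  # bits/s -> Mbit/s
    except Exception:
        logger.warning(f"Speedtest failed, assuming {DEFAULT_NETWORK_SPEED_MBPS} Mbit/s")
        return DEFAULT_NETWORK_SPEED_MBPS
```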
2023-07-22 18:49:37 +04:00
Alexander Borzunov
8666653cf5
Fix routing through relay, default network RPS, --token, logging, readme (#399)
* Hide GeneratorExit in _iterate_inference_steps()
* Update README.md about `--public_name`
* Use `.from_pretrained(..., use_auth_token=token)` instead of `token=token` until it's fully supported across HF libs (see the sketch after this list)
* Use default network speed 25 Mbit/s
* Apply relay penalty in max-throughput routing
* Replace RPS with "tokens/sec per block" in logs
* Increase default expiration
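A sketch of the token workaround named in the list above (the model name is an example; the token is a placeholder):

```python
from transformers import AutoConfig

token = "hf_..."  # your Hugging Face access token

# Passing the token as `use_auth_token=` keeps older HF libraries happy
# until the newer `token=` kwarg is supported everywhere.
config = AutoConfig.from_pretrained("meta-llama/Llama-2-70b-hf", use_auth_token=token)
```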
2023-07-22 18:27:58 +04:00
Alexander Borzunov
eb0664b993
Support Python 3.11 (#393) 2023-07-22 13:07:43 +04:00
Alexander Borzunov
6e4ebb94d2
Fix deadlocks in MemoryCache (#396)
- Fix deadlocks in MemoryCache
- Set default --alloc_timeout to 1 until the MemoryCache update
2023-07-21 11:09:24 +04:00
Alexander Borzunov
b6b3ae964f
Fix --attn_cache_tokens default (#392) 2023-07-20 23:20:15 +04:00
Alexander Borzunov
d49d9ad0cf
Bump version to 2.0.0.post3 (#391) 2023-07-20 21:07:00 +04:00
justheuristic
e51e84631d
Update to petals.dev (#390)
Since the `petals.ml` DNS record is still unavailable, we're switching everything to https://petals.dev

Co-authored-by: Aleksandr Borzunov <hxrussia@gmail.com>
2023-07-20 20:59:28 +04:00
Aleksandr Borzunov
ddcda02b06 Hardcode IPs until DNS issues get resolved 2023-07-20 08:55:35 +00:00
Alexander Borzunov
b1ff8bdd6c
Bump version to 2.0.0.post1 (#384) 2023-07-19 21:13:24 +04:00
Alexander Borzunov
e9a20e7e53
Require accelerate>=0.20.3 as transformers do (#383) 2023-07-19 20:28:23 +04:00
Alexander Borzunov
057a2fb5de
Support Llama 2 (#379) 2023-07-19 19:15:53 +04:00
Alexander Borzunov
3218534745
Fix --token arg (#378) 2023-07-19 15:25:34 +04:00