Commit Graph

488 Commits

Author SHA1 Message Date
Max Ryabinin
fa464dfc99 WIP Triton+QKV merge 2023-09-03 00:19:26 +03:00
Alexander Borzunov
b4d822afb2
Force use_cache=True in config only (#497)
This reverts part of #496 and instead overrides `use_cache` in `LlamaConfig`s only (so the correct value is visible to HF `.generate()` as well).
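Roughly, the override looks like this (a minimal sketch, not the actual patch; the model name is illustrative):

```python
from transformers import LlamaConfig

config = LlamaConfig.from_pretrained("petals-team/StableBeluga2")  # illustrative
config.use_cache = True  # set on the config itself, so HF .generate() sees the correct value
```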
2023-09-03 01:16:00 +04:00
Alexander Borzunov
abd547735f
Force use_cache=True (#496) 2023-09-02 22:57:18 +04:00
Alexander Borzunov
6ef6bf5fa2
Create model index in DHT (#491)
This PR creates an index of the models hosted in the swarm - it is useful to know which custom models users run and to display them at https://health.petals.dev as "not officially supported" models.
2023-08-31 10:31:03 +04:00
Alexander Borzunov
6bb3f54e39
Replace dots in repo names when building DHT prefixes (#489) 2023-08-30 23:31:39 +04:00
Alexander Borzunov
02fc71eb25
Fix race condition in MemoryCache (#487) 2023-08-30 14:13:43 +04:00
Alexander Borzunov
dc0072fde1
Wait for DHT storing state OFFLINE on shutdown (#486) 2023-08-30 07:48:11 +04:00
Alexander Borzunov
a26559ff65
Fix .generate(input_ids=...) (#485) 2023-08-30 06:59:33 +04:00
Alexander Borzunov
459933f846
Remove no-op process in PrioritizedTaskPool (#484)
Please revert this if you ever need to make `PrioritizedTaskPool` a process again.
2023-08-30 06:07:04 +04:00
Alexander Borzunov
26ebbfe8f0
Support macOS (#477)
This PR makes both clients and servers work on macOS. Specifically, it:

- Follows https://github.com/learning-at-home/hivemind/pull/586 to run a macOS-compatible `p2pd` binary (both x86-64 and ARM64 are supported)
- Fixes forking issues and tests on macOS, Python 3.10+
- Introduces basic support for serving model blocks on Apple M1/M2 GPUs (torch.mps)
- Increases the max number of open files by default (the default is not enough on Linux and is really small on macOS) - see the sketch below
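For the last point, a minimal sketch of raising the limit (the target value is an assumption, not the constant Petals uses):

```python
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
desired = 32768  # hypothetical target value
new_soft = desired if hard == resource.RLIM_INFINITY else min(desired, hard)
if soft < new_soft:
    resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
```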
2023-08-29 07:49:27 +04:00
Alexander Borzunov
75e516a8c1
Refactor readme (#482) 2023-08-28 19:09:13 +04:00
justheuristic
c08d09c4d3
Rewrite MemoryCache alloc_timeout logic (#434)
-    rpc_inference: the server now accepts an allocation timeout from the user, defaulting to no timeout
-    bugfix: the inference timeout is now measured from the moment the request is received (see the sketch after this list)
    -    previously, you would have to wait for your timeout plus the time it takes to sort through the queue (other users' timeouts)
    -    now, you get `AllocationFailed` if you had to wait over `timeout` seconds, regardless of other users
-    a request for inference with no timeout now fails instantly if there is not enough memory available
-    the number of bytes per dtype is now correctly determined for int, bool & other types
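A rough sketch of the new timeout accounting (`cache.try_allocate` is a stand-in name, not the real `MemoryCache` API):

```python
import time
from typing import Optional

import torch

class AllocationFailed(Exception):
    pass

def dtype_nbytes(dtype: torch.dtype) -> int:
    # element_size() is correct for int, bool & other dtypes alike
    return torch.empty((), dtype=dtype).element_size()

def allocate(cache, nbytes: int, timeout: Optional[float], received_at: float):
    # the timeout is measured from the moment the request was received,
    # not from when it reached the front of the queue
    while not cache.try_allocate(nbytes):
        if timeout is None:
            raise AllocationFailed("Not enough memory, and no timeout was requested")
        if time.monotonic() - received_at > timeout:
            raise AllocationFailed(f"Could not allocate {nbytes} bytes in {timeout} s")
        time.sleep(0.01)  # wait for other sessions to free memory
```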


---------

Co-authored-by: Your Name <you@example.com>
Co-authored-by: Alexander Borzunov <borzunov.alexander@gmail.com>
Co-authored-by: Aleksandr Borzunov <hxrussia@gmail.com>
2023-08-28 16:01:50 +03:00
Alexander Borzunov
90840dfea2
Fix requiring transformers>=4.32.0 (#480) 2023-08-26 05:32:16 +04:00
Alexander Borzunov
915b357740
Require transformers>=4.32.0 (#479)
This is necessary to load https://huggingface.co/petals-team/StableBeluga2, since it doesn't have the deprecated `inv_freq` weights.
2023-08-25 01:37:30 +04:00
Alexander Borzunov
18e93afc73
Don't install cpufeature on non-x86_64 machines (#478)
Necessary since cpufeature crashes during installation on ARM machines.
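The standard way to do this is a PEP 508 environment marker in the requirements; a sketch (the exact version pin is an assumption):

```python
# in setup.cfg / install_requires: skip cpufeature on non-x86_64 machines
install_requires = [
    'cpufeature>=0.2.0; platform_machine == "x86_64"',
]
```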
2023-08-24 19:57:15 +04:00
Alexander Borzunov
6967904590
Bump version to 2.1.0 (#474)
* Bump version to 2.1.0
* Suggest using resharded repo
* LLaMA -> Llama in readme
2023-08-24 19:42:19 +04:00
Alexander Borzunov
df8ab09ca2
Hide excess key message (#476)
Before:

```
Aug 23 23:51:31.394 [INFO] Loaded Maykeye/TinyLLama-v0 block 0, _IncompatibleKeys(missing_keys=[], unexpected_keys=['self_attn.rotary_emb.inv_freq'])
```

After:

```
Aug 23 23:51:31.394 [INFO] Loaded Maykeye/TinyLLama-v0 block 0
```

We hide this message because the excess keys in Llama-based models are harmless as of the latest transformers release.
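A hypothetical sketch of the change - tolerate the excess keys quietly but still surface genuinely missing weights:

```python
from hivemind.utils.logging import get_logger

logger = get_logger(__name__)

def load_block_weights(block, state_dict, model_name: str, block_index: int):
    report = block.load_state_dict(state_dict, strict=False)
    if report.missing_keys:  # missing weights are still worth a warning
        logger.warning(f"Missing keys for block {block_index}: {report.missing_keys}")
    logger.info(f"Loaded {model_name} block {block_index}")  # no _IncompatibleKeys spam
```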
2023-08-24 00:41:40 +04:00
Artem Chumachenko
a14ae7334d
Update peft to version 0.5.0 (#475)
2023-08-23 20:21:28 +04:00
Alexander Borzunov
a9b0e9ff1a
Support loading weights from Safetensors on server (#473) 2023-08-23 01:43:29 +04:00
justheuristic
4f850996bb
Change transformers version assert (#472) 2023-08-22 21:53:14 +04:00
justheuristic
9250025140
Support transformers 4.32.x (#471) 2023-08-22 20:10:29 +03:00
justheuristic
adda5f8c20
Temporarily require peft<0.5.0, transformers<4.32.0 (#470)
Peft 0.5.0 was recently released and broke some compatibilities. This PR temporarily pins Petals to the previous stable version of peft while we work on 0.5.0 support.
2023-08-22 19:45:37 +03:00
Alexander Borzunov
de2475f31c
Make client compatible with transformers' GenerationMixin (#464)
This PR drops the custom generation code and introduces compatibility with `transformers.GenerationMixin` instead. This includes support for more sampling options (`top_p`, `top_k`, `repetition_penalty` requested in #460) and beam search - all of which now behave identically to running the model with transformers locally.

Most features (excluding beam search and other rarely used stuff) are also compatible with resuming existing sessions.

### Breaking changes

If `.generate()` or forward passes are being run inside an `.inference_session()` context, they now use the opened session by default. So, these snippets are now equivalent:

```python
# Using default session
with model.inference_session(max_length=100):
    output_ids = model.generate(input_ids, max_new_tokens=3)

# Explicitly specifying a session
with model.inference_session(max_length=100) as sess:
    output_ids = model.generate(input_ids, max_new_tokens=3, session=sess)
```

Previously, the first snippet created a new session, which is not what most people expected (such code was likely to introduce a bug, which is now fixed).
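Since generation now goes through `transformers.GenerationMixin`, the usual sampling options work as they do locally. A sketch (the model name is illustrative):

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"  # illustrative
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

input_ids = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
output_ids = model.generate(
    input_ids, do_sample=True, top_p=0.9, top_k=50, repetition_penalty=1.2, max_new_tokens=16
)
print(tokenizer.decode(output_ids[0]))
```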
2023-08-20 19:18:36 +04:00
Alexander Borzunov
063e94b4c8
Move SequenceManagerConfig -> ClientConfig, petals.dht_utils -> petals.utils.dht (#463) 2023-08-14 17:05:20 +04:00
Artem Chumachenko
568f21dc3b
Add customizable input tensors (#445) 2023-08-14 12:23:16 +04:00
Alexander Borzunov
329f7d31e8
Add blocked_servers argument (#462)
Should be used as:

```python
model = AutoDistributedModelForCausalLM(model_name, blocked_servers=[peer_id1, peer_id2])
```
2023-08-14 10:41:13 +04:00
Alexander Borzunov
722c4dc496
Bump version to 2.0.1.post2 (#459) 2023-08-11 09:34:05 +04:00
Alexander Borzunov
056f22515a
Prioritize short inference, unmerge pools for long inference (#458)
Right now, long inference requests may occupy the Runtime for a few seconds without yielding it to short (the most latency-sensitive) requests. This PR fixes that by disallowing the merged pool for long requests and prioritizing the short ones.
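A simplified sketch of the scheduling idea (the threshold and names are assumptions, not the actual `PrioritizedTaskPool` code):

```python
import heapq
import itertools

SHORT_REQUEST_TOKENS = 128  # hypothetical boundary between short and long requests
_seq = itertools.count()
queue = []  # min-heap of (priority, seq_no, request)

def submit(request, num_tokens: int):
    # short requests are dequeued first, so long ones can never starve them
    priority = 0 if num_tokens <= SHORT_REQUEST_TOKENS else 1
    heapq.heappush(queue, (priority, next(_seq), request))

def next_request():
    return heapq.heappop(queue)[-1]
```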
2023-08-11 09:24:33 +04:00
justheuristic
55eb36ef48
Fix missing torch.cuda.synchronize for computing throughput (#456) 2023-08-09 22:59:56 +04:00
Alexander Borzunov
0e7189b3ed
benchmarks: Aggregate speed among workers, set default dtype to float32 (#454) 2023-08-09 16:50:02 +04:00
Alexander Borzunov
8c546d988a
Test Llama, rebalancing, throughput eval, and all CLI scripts (#452)
This PR extends CI to:

1. Test Llama code using [TinyLlama-v0](https://huggingface.co/Maykeye/TinyLLama-v0).
2. Test rebalancing (sets up a situation where the 1st server needs to change its original position).
3. Check if the benchmark scripts run (in case someone breaks their code). Note that the benchmark results are meaningless here (since they're measured on a tiny swarm of CPU servers, with low `--n_steps`).
4. Test `petals.cli.run_dht`.
5. Increase swap space and watch free RAM (a common issue is that actions are cancelled without explanation if there's not enough RAM - so it's a useful reminder + debug tool).
6. Fix flaky tests for bloom-560m by increasing tolerance.

Other minor changes: fix `--help` messages to show defaults, fix docs, tune rebalancing constants.
2023-08-08 19:10:27 +04:00
Alexander Borzunov
df6fdd2d0b
Force using --new_swarm instead of empty --initial_peers (#451)
This prohibits passing `--initial_peers` without arguments, since it's likely a side effect of `--initial_peers $INITIAL_PEERS` with the env var not set.

Users should use `--new_swarm` for that, as explained in the private swarm tutorial.
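A sketch of the check (not the actual CLI code): with `nargs="*"`, an empty list means the flag was passed with no values, which we reject.

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--initial_peers", nargs="*", default=None,
                    help="Multiaddrs of DHT peers to join")
parser.add_argument("--new_swarm", action="store_true",
                    help="Start a new private swarm instead of joining the public one")
args = parser.parse_args()

if args.initial_peers == []:  # `--initial_peers` given without values
    parser.error("--initial_peers requires at least one value; "
                 "use --new_swarm to start a new private swarm")
```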
2023-08-08 04:59:55 +04:00
Alexander Borzunov
2a150770a4
Prefer longer servers for fine-tuning, exclude unreachable (#448)
We prefer servers hosting longer block spans to minimize the number of hops, but keep some randomization to distribute the load. We also exclude servers known to be unreachable.
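A simplified sketch of the selection policy (`ServerInfo` here is a hypothetical structure, not the real class):

```python
import random
from dataclasses import dataclass
from typing import List

@dataclass
class ServerInfo:  # hypothetical, for illustration only
    span_start: int
    span_end: int
    known_unreachable: bool = False

def choose_server(servers: List[ServerInfo]) -> ServerInfo:
    candidates = [s for s in servers if not s.known_unreachable]
    # weight by span length: longer spans mean fewer hops, while the
    # randomization still spreads load across servers
    weights = [s.span_end - s.span_start for s in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]
```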
2023-08-07 21:43:21 +04:00
Alexander Borzunov
00d48dcbe1
Override float32 in config to bfloat16 (#431) 2023-08-07 19:47:22 +04:00
justheuristic
ac9b546706
[Refactor] extract block forward, backward and inference into a separate file (#435)
This PR does not change any functionality. It merely moves stuff around.
List of changes:

- `handler.py/_rpc_forward` became `block_methods/rpc_forward`
- `handler.py/_rpc_backward` became `block_methods/rpc_backward`
- the math bits of `rpc_inference` were extracted into `block_methods/iterate_rpc_inference`

---------

Co-authored-by: Your Name <you@example.com>
Co-authored-by: artek0chumak <artek.chumak@gmail.com>
Co-authored-by: Aleksandr Borzunov <borzunov.alexander@gmail.com>
2023-08-07 14:32:51 +03:00
Alexander Borzunov
593d980ad8
Use bitsandbytes 0.41.1 (#442) 2023-08-07 02:33:42 +04:00
Alexander Borzunov
32fbab5192
Remove deprecated comment in fine-tuning notebook (#443) 2023-08-07 02:22:21 +04:00
Alexander Borzunov
b58141ef66
Remove distracting links from readme (#441) 2023-08-06 18:55:22 +04:00
Alexander Borzunov
679397df0c
Update Discord links from channels to forums (#440)
As our Discord community grows, we found it difficult to track open and resolved issues in the **#running-a-client** and **#running-a-server** channels, as well as to navigate the interleaving conversations happening there. That's why we recreated these channels as Discord forums, where different discussions are separated into different posts.
2023-08-06 17:11:49 +04:00
Vadim Peretokin
d0b5af34cd
Fix typo and make blocks message more informative (#437)
The message really doesn't tell me much as a user, since I never touched `update_period` to begin with:

```
Aug 06 09:43:07.287 [WARN] [petals.server.server.run:701] Declaring blocs to DHT takes more than --update_period, consider increasing it
```

Made it better and more informative.
2023-08-06 16:47:21 +04:00
Alexander Borzunov
a1f7791d5e
Fix petals.utils.ping for servers with client-mode DHT (#430)
Fix #429.
2023-08-03 02:17:07 +02:00
Alexander Borzunov
351e96bc46
Penalize servers that use relays during rebalancing (#428)
Servers accessible only via relays may introduce issues if they are the only servers holding certain blocks. Specifically, a connection to such servers may be unstable or opened only after a delay.

This PR lowers their self-reported throughput, so that the rebalancing algorithm prefers directly reachable servers for hosting each block.
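A sketch of the idea (the penalty factor is an assumption, not the real constant):

```python
RELAY_PENALTY = 0.2  # hypothetical factor

def reported_throughput(measured_rps: float, reachable_directly: bool) -> float:
    # relayed servers advertise lower throughput, so rebalancing
    # prefers directly reachable servers for each block
    return measured_rps if reachable_directly else measured_rps * RELAY_PENALTY
```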
2023-08-03 02:00:43 +02:00
Alexander Borzunov
6a1b8a6a90
Add Stable Beluga 2 to readme (#424) 2023-07-31 01:23:56 +02:00
Alexander Borzunov
44fefa5e54
Add connect_timeout (#423) 2023-07-30 21:31:19 +02:00
Alexander Borzunov
cdc0f70653
Add Discord badge and more Discord links to readme (#422) 2023-07-30 16:07:38 +02:00
Guocheng
8072cd9d1b
Fix stale link (#418) 2023-07-25 16:21:15 +02:00
Alexander Borzunov
f3fafd14a4
Bump version to 2.0.1 (#411) 2023-07-23 18:45:19 +04:00
Alexander Borzunov
fd19c21859
Update --update_period and --expiration defaults (#410) 2023-07-23 17:22:04 +04:00
Alexander Borzunov
ffb20b585c
Update commands for hosting Llama 2 in readme (#409) 2023-07-23 13:08:07 +04:00
Alexander Borzunov
48c6b6d963
Update README.md (#407) 2023-07-23 00:41:41 +04:00