Commit Graph

376 Commits

Alexander Borzunov
6137b1b4b0
Replace .make_sequence(..., mode="random") with mode="max_throughput" (#313)
We need to sample the next server using its throughput as the weight to actually achieve max throughput for fine-tuning.

As an example, imagine a situation where we have 3 servers with throughputs [1000, 500, 1] hosting the same blocks, then compare the uniform and weighted sampling strategies.
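The weighted strategy can be sketched with Python's `random.choices` (the server names and throughput numbers below are illustrative, not the actual Petals API):

```python
import random

# Illustrative servers: name -> throughput; the real client samples actual
# servers found in the DHT, so these names are assumptions.
servers = {"A": 1000, "B": 500, "C": 1}

def pick_server_uniform(servers):
    # mode="random": every server is equally likely, so the slow server "C"
    # is chosen as often as the fast ones and bottlenecks the pipeline
    return random.choice(list(servers))

def pick_server_weighted(servers):
    # mode="max_throughput": sampling probability is proportional to throughput
    names = list(servers)
    weights = [servers[name] for name in names]
    return random.choices(names, weights=weights, k=1)[0]
```

With uniform sampling, about a third of requests hit the throughput-1 server; with weighted sampling, it receives only ~0.07% of them.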
2023-05-09 22:38:20 +04:00
Alexander Borzunov
0a313bf6c5
Update hivemind to 1.1.8, enable efficient bfloat16 encoding (#311)
This PR:

1. Updates hivemind to 1.1.8 (includes https://github.com/learning-at-home/hivemind/pull/565)
2. Enables efficient bfloat16 serialization by default (`USE_LEGACY_BFLOAT16 = False`)
3. Removes logging code that was included into hivemind in https://github.com/learning-at-home/hivemind/pull/542
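As a reminder of why the efficient encoding matters: bfloat16 is just the top 16 bits of a float32, so an efficient codec needs 2 bytes per value, while a legacy codec that upcasts to float32 needs 4. A minimal pure-Python illustration (not the actual hivemind codec):

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    # bfloat16 keeps the sign, exponent, and top 7 mantissa bits of float32,
    # i.e. the upper 16 bits of its IEEE-754 encoding (truncation, no rounding)
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bfloat16_bits_to_float32(b: int) -> float:
    # widening back is exact: pad the lower 16 bits with zeros
    (x,) = struct.unpack("<f", struct.pack("<I", b << 16))
    return x
```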
2023-05-07 14:57:05 +04:00
Alexander Borzunov
8f6342a861
Refactor RemoteSequenceManager (#309)
This PR:

1. **Extracts `SequenceManagerConfig` and `SequenceManagerState` subclasses.**

    The config is provided by the caller and never changed from inside `RemoteSequenceManager`. The state is the part of `RemoteSequenceManager`'s state that is shared between the main manager and its slices. We fix some slicing bugs along the way.

2. **Removes `dht_prefix` and `p2p` arguments, makes `dht` argument optional.**

    `dht_prefix` can always be overridden using `config.dht_prefix`. `p2p` is actually needed only under the hood of `RemoteSequenceManager`, so it can be extracted there without exposing this low-level class to callers. If strictly necessary, a caller can provide `p2p` as a part of `SequenceManagerState`. `dht` is also needed only by `RemoteSequenceManager`, so we can make it optional in the parent classes and create it automatically when it's not provided.

3. **Simplifies retry logic.**

    Previously, we could have "nested" retry loops: one in `._update()`, another in inference/forward/backward steps. The loop in `._update()` could introduce issues to concurrent inference/forward/backward calls, since it blocks the entire class if its delay period becomes too high. Now this logic is simplified: `._update()` performs only one attempt to fetch the DHT info, any retries are triggered by the inference/forward/backward steps.

4. **Removes deprecated `RemoteTransformerBlock`.**

    `RemoteTransformerBlock` was deprecated a long time ago, before Petals 1.0.0. Its removal is long overdue.

5. **Removes `dht_utils.get_remote_module()`, `dht_utils.get_remote_sequence()`.**

    These functions duplicated the functionality of the `RemoteSequential` constructor.

6. (minor) **Removes `RemoteSequential.is_subsequence` flag.**

    This flag worked incorrectly and was never used. I am removing it for the sake of simplicity.
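The config/state split described in items 1-2 can be sketched roughly like this (the field names here are assumptions for illustration, not the actual Petals definitions):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SequenceManagerConfig:
    # provided by the caller; never mutated from inside RemoteSequenceManager
    dht_prefix: Optional[str] = None
    request_timeout: float = 30.0
    max_retries: Optional[int] = None

@dataclass
class SequenceManagerState:
    # mutable state shared between the main manager and its slices
    p2p: Optional[object] = None
    sequence_info: Optional[object] = None
```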
2023-05-07 13:41:13 +04:00
Alexander Borzunov
454c193863
Fix OOMs happening in case of accelerate >= 0.16.0 (#310)
- After #285, `load_pretrained_block()` uses `accelerate.utils.set_module_tensor_to_device()`
- In accelerate>=0.16.0, it saves the tensor in the dtype previously used by the model instead of the dtype of the weights (https://github.com/huggingface/accelerate/pull/920)
- Because of that, blocks and attention caches used float32, which caused OOMs
- This PR makes `load_pretrained_block()` respect `torch_dtype` (default: `"auto"`, which means reading `torch_dtype` from `config.json`)
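The `"auto"` behavior can be sketched as follows (a simplified, hypothetical helper; the real code sets tensors via `accelerate.utils.set_module_tensor_to_device()`):

```python
# Hypothetical sketch of resolving torch_dtype="auto" from a model's config.json
def resolve_torch_dtype(requested, config: dict):
    if requested == "auto":
        # fall back to the dtype declared by the checkpoint, defaulting to float32
        return config.get("torch_dtype", "float32")
    return requested
```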
2023-04-25 17:20:19 +04:00
Alexander Borzunov
93c4eba5d1
Bump version to 1.1.4 (#306) 2023-04-21 05:41:01 +04:00
Alexander Borzunov
c0e0e1319d
Force transformers to use config.torch_dtype by default (#307) 2023-04-13 14:41:54 +04:00
Alexander Borzunov
98be9ffe4c
Relax the rest of Hugging Face dependencies (#305) 2023-04-13 01:05:35 +04:00
Alexander Borzunov
5c0b4286b2
Suggest commands for Docker first (#304) 2023-04-13 00:00:35 +04:00
Alexander Borzunov
35662b4a16
Require bitsandbytes == 0.38.0.post2, hivemind == 1.1.7 (#302)
In particular, this PR fixes 8-bit support on nvidia16 GPUs (such as 1660) by including https://github.com/TimDettmers/bitsandbytes/pull/292. This support was requested multiple times on Discord.
2023-04-12 23:07:29 +04:00
Alexander Borzunov
21c3526ec1
Start SequenceManager's thread only after first .make_sequence() (#301)
**Why?**

- We'd like to avoid excess threads for the original sequence manager in case we only use its slices (e.g. when we add adapters or need only a subset of model blocks).

- If we create a sequence manager just before a fork (e.g. in a web app backend or a multi-thread benchmark), we'd like to avoid excess threads in the original process and only use this thread in child processes where we actually call `.make_sequence()`.
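A minimal sketch of the lazy-startup pattern, assuming a simplified manager class (not the actual `RemoteSequenceManager`):

```python
import threading

class LazyManager:
    """Starts its background update thread only on first use."""

    def __init__(self):
        self._thread = None
        self._lock = threading.Lock()

    def make_sequence(self):
        self._ensure_thread_started()  # thread is created here, not in __init__
        return []

    def _ensure_thread_started(self):
        with self._lock:  # safe if called from multiple threads at once
            if self._thread is None:
                self._thread = threading.Thread(target=self._update_loop, daemon=True)
                self._thread.start()

    def _update_loop(self):
        pass  # would periodically refresh routing info in the real manager
```

A process forked before the first `.make_sequence()` call inherits no running thread, which is exactly the behavior the PR wants.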
2023-04-12 21:38:43 +04:00
Alexander Borzunov
6c6150f684
Remove use_auto_relay=True in client (#300)
`use_auto_relay=True` makes the libp2p daemon look for relays to become reachable if we are behind NAT/firewall. However, being reachable is not necessary for the Petals client, and we should not spend the relays' capacity on this.
2023-03-31 16:39:48 +04:00
Alexander Borzunov
892fa2386a
Remove CustomLinear8bitLt (#297)
This became a part of https://github.com/TimDettmers/bitsandbytes/releases/tag/0.37.0.
2023-03-29 05:21:16 +04:00
Alexander Borzunov
74d8cda8c4
Add Python 3.10 to CI (#299) 2023-03-29 04:41:07 +04:00
Alexander Borzunov
2116df08bc
Fix deps, enable 8-bit by default for TP (#298)
This PR fixes issues of #290:

- hivemind bfloat16 codec crashed on dummy tensors (with 0 elements), see https://github.com/learning-at-home/hivemind/pull/560 (this PR makes Petals depend on the latest hivemind version from the repo; this is temporary)
- transformers version check mismatched with the version allowed in `setup.cfg`

Also:

- This PR enables 8-bit by default for TP. Even though TP in 8-bit may be slower, we currently prefer to host more blocks to increase the network's stability.
2023-03-29 04:21:37 +04:00
justheuristic
987f4d2b2f
Update bitsandbytes, hivemind, transformers (#290)
- new bitsandbytes supports newer *and* older GPUs
- new hivemind supports a better bfloat16 codec

Co-authored-by: Alexander Borzunov <borzunov.alexander@gmail.com>
2023-03-29 01:20:29 +04:00
Alexander Borzunov
e0cef73757
Hotfix: Increase daemon_startup_timeout (#292)
For some reason, right now 15 sec is not enough to connect to the bootstrap peers in the public swarm, as reported by multiple users and observed by me. Increasing it to 120 sec until we find the root cause of the issue.
2023-03-15 17:21:30 +04:00
Alexander Borzunov
a7d3d02194
Fix invalid author email in setup.cfg (#287) 2023-03-13 06:21:09 +04:00
Alexander Borzunov
8dab37c1a9
Add benchmarks to readme (#284) 2023-03-13 05:55:27 +04:00
Max Ryabinin
793726b041
Speed up loading blocks using init with meta weights (#285)
* Init WrappedBloomBlock with meta weights

---------

Co-authored-by: Alexander Borzunov <borzunov.alexander@gmail.com>
2023-03-13 00:49:04 +03:00
Alexander Borzunov
c519bffc59
Bump version to 1.1.3 (#278) 2023-03-01 13:04:21 +04:00
Alexander Borzunov
aae1f4f368
Increase default request_timeout (#276)
This PR increases `request_timeout`, since the previous default of 30 sec is not enough for many use cases.

Previously, we kept the request timeout low since we assumed that the server could freeze on dial if the target peer is behind a firewall. However, apparently, it won't freeze because libp2p has its own [dial timeout](https://github.com/libp2p/go-libp2p/blob/v0.26.0/core/network/context.go#L11).
2023-02-27 16:43:06 +04:00
justheuristic
fb2583b682
Use inference mode in _MergedInferenceStep (#275) 2023-02-27 13:28:01 +04:00
Alexander Borzunov
fd9400b392
Fix use_chunked_forward="auto" on non-x86_64 machines (#267)
Import of cpufeature may crash on non-x86_64 machines, so this PR makes the client import it only if necessary.
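The guarded import can be sketched like this (the helper name is an assumption; the real client consults `cpufeature` only when deciding whether chunked forward is needed):

```python
import platform

def try_import_cpufeature():
    # Only import cpufeature on x86_64: the import itself may crash elsewhere
    if platform.machine() not in ("x86_64", "AMD64"):
        return None
    try:
        import cpufeature
        return cpufeature
    except ImportError:
        return None
```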
2023-02-21 06:11:53 +04:00
Alexander Borzunov
a2e7f27a5a
Improve "connect your GPU" message (#266) 2023-02-19 07:00:16 +04:00
Alexander Borzunov
fee19e9b9b
Use get_logger(__name__) instead of get_logger(__file__) (#265) 2023-02-19 05:46:17 +04:00
Alexander Borzunov
55e7dc07a0
Limit max delay between retries to 15 min (#264) 2023-02-19 05:07:21 +04:00
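A capped exponential backoff like the one described can be sketched as follows (the 1-second base delay is an assumption, not the actual constant):

```python
def retry_delay(attempt: int, base: float = 1.0, max_delay: float = 15 * 60) -> float:
    # exponential backoff: 1 s, 2 s, 4 s, ... capped at 15 minutes
    return min(base * 2 ** attempt, max_delay)
```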
Alexander Borzunov
38b071135b
Show visible maddrs for public swarm too (#263) 2023-02-19 04:34:47 +04:00
Alexander Borzunov
42594e5173
Link FAQ in readme (#260) 2023-02-17 07:54:02 +04:00
Alexander Borzunov
2a5070aa1a
Improve reachability logs (#253) 2023-02-07 01:52:36 +04:00
Alexander Borzunov
4091db10bf
Lower payload size threshold for stream handlers (#251)
Hotfix: we add "// 2" since hivemind==1.1.5 serializes bfloat16 tensors in float32, so they take 2x more space.
2023-02-07 00:56:58 +04:00
Alexander Borzunov
9954cb84fe
Add allowed_servers, max_retries options to the client, improve logs (#235) 2023-02-06 02:22:18 +04:00
Alexander Borzunov
3c523ab0d2
Fix TP crashing when hypo_ids are used (#249) 2023-02-02 22:04:19 +03:00
justheuristic
b8a6788490
Fix examples/sst, add cls_model embeddings (#248) 2023-02-02 00:32:27 +06:00
justheuristic
8766a14d28
Minor changes to examples/prompt-tuning notebooks (#247)
Minor code changes required to run the notebook in a clean python environment
2023-02-01 14:10:45 +03:00
Alexander Borzunov
5367523df8
Fix typo in prompt-tuning-sst2.ipynb (#245) 2023-01-31 19:06:51 +06:00
Alexander Borzunov
b03efb1ef5
Bump version to 1.1.2 (#244) 2023-01-31 02:17:38 +06:00
Alexander Borzunov
5d7395e1b5
Prompt-tuning notebooks: suggest to use a smaller model for faster prototyping (#234) 2023-01-24 10:01:31 +04:00
Artem Chumachenko
d4c687daca
Fix dtype error in fine-tuning notebooks (#231) 2023-01-23 05:09:14 +04:00
Muhtasham Oblokulov
0ebf6de117
Add citation to readme (#219)
Co-authored-by: Alexander Borzunov <borzunov.alexander@gmail.com>
2023-01-21 07:05:41 +04:00
justheuristic
c4938bc23e
Merge inference pools into one to increase inference speed (#225)
It turns out using a separate pool for each block led to a significant slowdown; see #224 for details.
2023-01-19 19:38:21 +04:00
Shuchang Zhou
3189b395f0
Fix a typo in error message (#227)
From the code context, it can be inferred that `do_sample == False` when control reaches this point.
2023-01-19 18:38:43 +04:00
Alexander Borzunov
fa5ac6e3b4
Mention BLOOMZ in readme (#221) 2023-01-18 03:23:21 +04:00
Alexander Borzunov
e651d73f11
Add one more link to the "Getting started" tutorial (#218)
Some people miss the "Try now in Colab" link or don't understand that it leads to the comprehensive tutorial, so I added one more explicit link.
2023-01-16 04:35:06 +04:00
Alexander Borzunov
af3da5bb04
Choose --num_blocks automatically for all models (#217) 2023-01-16 01:53:09 +04:00
Alexander Borzunov
cea83d3356
Bump version to 1.1.1 (#214) 2023-01-14 00:34:46 +04:00
Alexander Borzunov
702bb5a2c2
CI: Update deprecated actions, don't measure network RPS (#215)
* CI: Switch to actions/cache@v3 (v2 is deprecated)
* Don't run measure_network_rps() in tests since it doesn't work well in CI
2023-01-13 20:16:31 +04:00
Alexander Borzunov
825f5dbf2d
CI: Convert model only when convert_model.py or setup.cfg change (#213)
This halves the test running time, unless convert_model.py or setup.cfg are changed.
2023-01-13 19:53:57 +04:00
Alexander Borzunov
5ff250bee9
Improve errors in case of missing blocks, suggest to join your own server (#212) 2023-01-13 17:53:00 +04:00
Alexander Borzunov
6ba63c6cc8
Fix output shape when resuming generation (#211)
Before this PR, `model.generate()` returned one excess token when resuming generation with an existing session (the last token of the previous session, `session.last_token_id`). This is unexpected behavior, inconvenient for downstream apps, so this PR changes it before it's too late.
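A toy model of the fixed behavior (purely illustrative, not the Petals generation code): each call returns only the tokens generated in that call, never re-emitting the previous session's last token.

```python
class ToySession:
    """Pretend generation session where token ids are consecutive integers."""

    def __init__(self):
        self.last_token_id = None

    def generate(self, n_new: int):
        # resume right after the last token instead of re-emitting it
        start = 0 if self.last_token_id is None else self.last_token_id + 1
        tokens = list(range(start, start + n_new))
        self.last_token_id = tokens[-1]
        return tokens  # only newly generated tokens, no overlap with past calls
```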
2023-01-13 16:27:10 +04:00
Alexander Borzunov
cc5e5d32c0
Don't switch blocks if it makes swarm disjoint (#210)
Even if the swarm seems to have at least 2 servers for each block, turning off one of the servers could break it. That's because once a server is turned off, others may move to a better position, causing significant downtime along the way. This PR prohibits switching blocks if it would make the swarm disjoint.
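The invariant can be sketched as a coverage check (the data structures are illustrative, not the actual Petals logic): a server may release its blocks only if every block it hosts remains hosted by someone else.

```python
def swarm_stays_covered(block_owners: dict, leaving_server: str) -> bool:
    # block_owners: block index -> set of servers currently hosting that block.
    # The swarm stays connected only if every block keeps at least one host
    # after leaving_server drops its blocks.
    return all(owners - {leaving_server} for owners in block_owners.values())
```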
2023-01-13 08:45:53 +04:00