Commit Graph

415 Commits

Author SHA1 Message Date
Ikko Eltociear Ashimine
fd30f7ce10
Fix typo in generation_algorithms.py (#364) 2023-07-18 05:44:41 +04:00
Alexander Borzunov
11f0d992d7
Report inference, forward, and network RPS separately (#358)
Inference RPS may be very different from forward RPS. E.g., currently bnb uses a completely different algorithm for NF4 inference. We report detailed RPS info that can then be used for shortest-path routing for inference.
2023-07-17 13:45:59 +04:00
Alexander Borzunov
9517dd1e3d
Update readme and "Getting started" link (#360)
This updates readme with the latest updates and fixes an old Colab link, as pointed out in #359.
2023-07-17 05:02:08 +04:00
Alexander Borzunov
3f733a96e3
Use bitsandbytes 0.40.1.post1 (#357) 2023-07-16 03:07:21 +04:00
Alexander Borzunov
81c4a45ca2
Make a server ping next servers (#356)
This PR makes a server ping potential next servers in a chain and report the RTTs to DHT. This will be used for shortest-path routing.
2023-07-15 20:16:21 +04:00
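Once each server reports RTTs to its potential next servers, choosing a chain becomes a shortest-path problem over those measurements. A minimal sketch of that idea (the graph shape and function name are invented for illustration, not the actual Petals routing code):

```python
import heapq

def shortest_chain(rtt, start, end):
    """Dijkstra over measured RTTs (a sketch; invented names).

    `rtt[a][b]` is the round-trip time from server `a` to a potential
    next server `b`, as it might be reported to the DHT.
    """
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == end:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, cost in rtt.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(heap, (nd, nxt))
    # Walk predecessors back from the end to recover the path.
    path = [end]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1], dist[end]

rtt = {"client": {"A": 10, "B": 50}, "A": {"B": 5, "C": 100}, "B": {"C": 20}}
path, cost = shortest_chain(rtt, "client", "C")
# path == ["client", "A", "B", "C"], total RTT 35
```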
Alexander Borzunov
2c8959e713
Share more info about a server in DHT (#355) 2023-07-15 03:36:31 +04:00
justheuristic
37fdcb3fe0
Switch adapters slightly faster (#353)
Currently, each `TransformerBackend.inference_step` looks for adapters and sets the correct adapter type for each block. This is not very expensive, but it can measurably affect inference time.

This pull request uses faster adapter switching with just one variable assignment, without iterating over block.modules().
2023-07-14 23:04:55 +04:00
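The gist of the change above is avoiding a per-module walk on every inference step. A toy sketch of that pattern (class and counter names are invented; the real `TransformerBackend` differs):

```python
class AdapterCacheSketch:
    """Invented illustration: remember the active adapter and pay the
    per-module cost only when it actually changes."""

    def __init__(self, num_submodules=100):
        self.num_submodules = num_submodules
        self.active_adapter = None
        self.module_writes = 0  # counts per-module assignments performed

    def set_adapter_naive(self, adapter):
        # Old path: iterate over block.modules() on every inference step.
        for _ in range(self.num_submodules):
            self.module_writes += 1
        self.active_adapter = adapter

    def set_adapter_fast(self, adapter):
        # New path: a single variable assignment, and only on change.
        if adapter == self.active_adapter:
            return
        self.active_adapter = adapter
        self.module_writes += 1

cache = AdapterCacheSketch()
for _ in range(1000):
    cache.set_adapter_fast("lora-A")  # 999 of these are free
```

After 1000 steps with the same adapter, `module_writes` is 1 on the fast path versus 100,000 on the naive one.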
Alexander Borzunov
9703358df0
Fix bugs in _choose_num_blocks() added in #346 (#354) 2023-07-14 22:33:48 +04:00
Alexander Borzunov
1a78638c02
Test that bitsandbytes is not imported when it's not used (#351)
We avoid importing bitsandbytes when it's not used, since bitsandbytes doesn't always find the correct CUDA libs and may raise exceptions because of that.
2023-07-14 18:40:47 +04:00
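The deferred-import pattern being tested here can be sketched as follows (the `quantize` helper and the `Int8Params` call site are invented for illustration; only the import placement is the point):

```python
import sys

def quantize(weights, use_8bit=False):
    """Sketch: `bitsandbytes` is imported only inside the branch that
    needs it, so merely loading this module (or running without
    quantization) never triggers its CUDA library probing."""
    if not use_8bit:
        return weights
    import bitsandbytes as bnb  # deferred on purpose; may raise on bad CUDA setup
    return bnb.nn.Int8Params(weights)  # illustrative call, not the real call site

result = quantize([1.0, 2.0])  # 8-bit path untaken: bitsandbytes never imported
```

A test like the one in this PR can then assert that `"bitsandbytes" not in sys.modules` after exercising the non-quantized path.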
justheuristic
c511990236
Remove unused import os (#352) 2023-07-14 18:05:21 +04:00
Alexander Borzunov
e12d4c666b
Spam less in server logs (#350) 2023-07-14 02:52:52 +04:00
justheuristic
010857a834
Estimate adapter memory overhead in choose_num_blocks() (#346)
* estimate adapter memory overhead
* reduce number of heads based on that

---------

Co-authored-by: Alexander Borzunov <borzunov.alexander@gmail.com>
2023-07-14 01:03:42 +03:00
Alexander Borzunov
f605f093f7
Support LLaMA repos without "-hf" suffix (#349) 2023-07-14 00:43:28 +04:00
Alexander Borzunov
90fbaab61e
Fix Docker build by avoiding Python 3.11 (#348)
We want to use `3.10.x` since `grpcio-tools` is not compatible with 3.11 yet. However, `python~=3.10` means `python>=3.10, python<4.0`, so we ended up with a broken build due to Python 3.11 being installed.
2023-07-13 19:34:17 +04:00
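The `~=` ("compatible release") operator pins only the components to the left of the last one given, so `~=3.10` admits 3.11. A stdlib-only sketch of this rule for two-component floors (a simplified model of PEP 440, not a full implementation):

```python
def satisfies_compatible_release(version, floor):
    """Simplified PEP 440 `~=` check for `major.minor` floors:
    `~=3.10` is equivalent to `>=3.10, ==3.*`."""
    major, minor = (int(x) for x in floor.split("."))
    v_major, v_minor = (int(x) for x in version.split(".")[:2])
    return v_major == major and v_minor >= minor

# Why the Docker build broke: 3.11 satisfies `~=3.10`.
assert satisfies_compatible_release("3.11", "3.10")
assert satisfies_compatible_release("3.10", "3.10")
assert not satisfies_compatible_release("4.0", "3.10")
```

Pinning `>=3.10, <3.11` (or `~=3.10.0`, which floors at the patch level) keeps the build on 3.10.x.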
Alexander Borzunov
43acfe52a7
Import petals.utils.peft only when needed to avoid unnecessary import of bitsandbytes (#345)
The motivation is the same as in #180.
2023-07-12 23:15:16 +04:00
Alexander Borzunov
294970fe18
Update Colab link 2023-07-12 17:00:15 +04:00
Alexander Borzunov
515a5120cb
Mention LLaMA in readme (#344) 2023-07-12 16:58:58 +04:00
Max Ryabinin
13f4e3a88a
Fix convergence issues and switch to LLaMA in the SST-2 example (#343)
* Fix convergence issues and switch to LLaMA in the SST-2 example
2023-07-12 15:50:54 +03:00
Artem Chumachenko
b9f0a5467f
Support peft LoRA adapters (#335)
Implement an option to deploy PEFT adapters to a server. Clients can set active_adapter=... to use these adapters.

---------

Co-authored-by: Aleksandr Borzunov <borzunov.alexander@gmail.com>
Co-authored-by: justheuristic <justheuristic@gmail.com>
2023-07-12 15:22:28 +03:00
Alexander Borzunov
dfc6578c8e
Use bitsandbytes 0.40.0.post4 with bias hotfix (#342)
This PR includes a bnb hotfix: 90b0ac57b0
2023-07-12 15:29:59 +04:00
Alexander Borzunov
b28f5016ea
Delete deprecated petals.cli scripts (#336) 2023-07-11 21:42:35 +04:00
Alexander Borzunov
fa095f6461
Use 4-bit for llama by default, use bitsandbytes 0.40.0.post3 (#340)
NF4 inference with bitsandbytes 0.40.0.post3 is ~2x faster than int8 inference, though training is still ~3x slower, see:

- [bitsandbytes 0.40.0 Release notes](https://github.com/TimDettmers/bitsandbytes/releases/tag/0.40.0)
- [RPS benchmarks](https://github.com/bigscience-workshop/petals/pull/333#issuecomment-1614040385)

We've decided to use NF4 by default for LLaMA.
2023-07-11 18:53:17 +04:00
Alexander Borzunov
158013a671
Implement direct server-to-server communication (#331)
Implement #226.
2023-07-11 17:29:34 +04:00
Alexander Borzunov
4d9c26fe5c
Allow free_disk_space_for() remove arbitrary files from Petals cache (#339)
Before this PR, `free_disk_space_for()` was able to remove **(a)** only entire cached revisions (= git commits/branches) and **(b)** only from the repository we're loading right now.

This PR allows this function to remove arbitrary files from any repository.

This is useful for transition to Petals 1.2.0+, since it now uses original repos instead of the ones with converted models (see #323). In particular, the cache for `bigscience/bloom-petals` is now deprecated and should be removed in favor of `bigscience/bloom`. This is also useful as a way to free space before loading LoRA adapters (#335).
2023-07-05 14:57:59 +04:00
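File-level eviction like this usually amounts to deleting the least-recently-used cache files until enough space is reclaimed. A self-contained sketch with an invented signature (the real `free_disk_space_for()` also checks actual disk usage and Hugging Face cache layout):

```python
import tempfile
from pathlib import Path

def free_disk_space_for(cache_dir, bytes_needed):
    """Sketch (invented signature): remove least-recently-modified files,
    from any cached repo rather than only whole revisions, until at least
    `bytes_needed` bytes are reclaimed. Returns the bytes freed."""
    files = [p for p in Path(cache_dir).rglob("*") if p.is_file()]
    files.sort(key=lambda p: p.stat().st_mtime)  # oldest first
    freed = 0
    for path in files:
        if freed >= bytes_needed:
            break
        freed += path.stat().st_size
        path.unlink()
    return freed

# Demo: three 4-byte cached files; ask for 5 bytes of space.
tmp = tempfile.mkdtemp()
for name, payload in [("old.bin", b"xxxx"), ("mid.bin", b"yyyy"), ("new.bin", b"zzzz")]:
    (Path(tmp) / name).write_bytes(payload)
freed = free_disk_space_for(tmp, bytes_needed=5)
# Two files (8 bytes) are deleted; the third survives.
```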
Alexander Borzunov
de930918a0
Support loading blocks in 4-bit (QLoRA NF4 format, disabled by default) (#333) 2023-07-03 20:13:04 +04:00
Alexander Borzunov
66a47c763e
Require pydantic < 2.0 (2.0 is incompatible with hivemind 1.1.8) (#337)
See https://github.com/learning-at-home/hivemind/pull/573.
2023-07-02 03:32:51 +04:00
Alexander Borzunov
10c72acdf4
Fix warmup steps and minor issues in benchmarks (#334)
The previous code was incorrect for the case of `warmup_steps != 1` (this mode was never used, but can be used in the future).
2023-06-30 04:18:43 +04:00
Alexander Borzunov
d126ee3053
Add benchmark scripts (#319)
This PR:

- Adds benchmark scripts for inference, forward pass, and full training step (e.g. used for experiments in our paper).
- Fixes bug with dtypes in `petals.DistributedBloomForSequenceClassification`.
- (minor refactor) Moves `DTYPE_MAP` to `petals.constants` as a useful constant.
2023-06-30 01:12:59 +04:00
Alexander Borzunov
fecee8c4dc
Show license links when loading models (#332) 2023-06-24 20:19:18 +04:00
Alexander Borzunov
47a2b1ee65
Fix llama's lm_head.weight.requires_grad (#330)
By default, `llama's lm_head.weight.requires_grad` was True, but we expect it to be False.
2023-06-24 02:30:13 +04:00
Alexander Borzunov
7a37513f77
Add AutoDistributed{Model, ModelForCausalLM, ModelForSequenceClassification} (#329)
This PR adds `petals.AutoDistributed{Model, ModelForCausalLM, ModelForSequenceClassification}` classes, similar to their `transformers.Auto{Model, ModelForCausalLM, ModelForSequenceClassification}` counterparts.
2023-06-23 18:42:50 +04:00
Alexander Borzunov
cb3f018f9f
Add LLaMA support (#323)
This PR:

1. **Abolishes the model conversion procedure.** Now, models are downloaded directly from original repositories like https://huggingface.co/bigscience/bloom. Servers download only shards with blocks to be hosted, and clients download only shards with input/output embeddings and layernorms.

    - BLOOM is loaded from `bigscience/bloom`, but we use the DHT prefix `bigscience/bloom-petals` for backward compatibility. Same with smaller BLOOMs and BLOOMZ.
    - LLaMA can be loaded from any repo like `username/llama-65b-hf`, but we use the DHT prefix `llama-65b-hf` (without the username) to accommodate blocks from different repos (there are a few of them with minor differences, such as `Llama` vs. `LLaMA` in the class name).

2. **Refactors the client to generalize it for multiple models.** Now, we have `petals.models` packages that contain model-specific code (e.g. `petals.models.bloom`, `petals.models.llama`). General code (e.g. CPU-efficient LM head, p-tuning) is kept in `petals.client`.

3. **Introduces** `WrappedLlamaBlock`, `DistributedLlamaConfig`, `DistributedLlamaForCausalLM`, `DistributedLlamaForSequenceClassification`, and `DistributedLlamaModel` compatible with Petals functionality (p-tuning, adapters, etc.).

4. **Introduces** `AutoDistributedConfig` that automatically chooses the correct config class (`DistributedLlamaConfig` or `DistributedBloomConfig`). The refactored configs contain all model-specific info for both clients and servers.

Upgrade instructions:

- Remove disk caches for blocks in old (converted) format to save disk space. That is, remove `~/.cache/petals/model--bigscience--bloom-petals` and  `~/.cache/petals/model--bigscience--bloomz-petals` directories (if present).
2023-06-23 15:46:10 +04:00
Max Ryabinin
5c0733711a
Use number of tokens for attn_cache_size (#286)
* Use number of tokens for attn_cache_size

* Fix cache_bytes_per_block

* Rename attn_cache_size to attn_cache_tokens
2023-06-17 14:14:31 +03:00
Max Ryabinin
c839173e57
Determine block dtype in a unified manner (#325)
* Extract backend_dtype, remove duplicate DTYPE_MAP

* Use bfloat16 as the default dtype, resolve dtype in load_pretrained_block
2023-06-16 12:52:51 +03:00
Max Ryabinin
3e7ae5116d
Remove unused imports and attributes (#324)
* Remove unused imports and attributes
2023-06-11 00:44:41 +03:00
Alexander Borzunov
675bacb592
Bump version to 1.1.5 (#312) 2023-05-10 03:01:01 +04:00
Alexander Borzunov
e026952338
Abort speedtest if it runs too long (#316)
Addresses #192 and, specifically, #280.
2023-05-10 01:53:31 +04:00
Alexander Borzunov
6eb306a605
Raise error for unexpected .generate() kwargs (#315)
Before this PR, if a user passed unexpected kwargs to `.generate()`, they were silently __ignored__, and the code continued working as if the argument were correctly supported. For example, people often tried passing `repetition_penalty` and didn't notice that it had no effect. This PR fixes this problem by raising an error instead.
2023-05-09 23:51:57 +04:00
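The fail-fast idea can be sketched with a keyword-only signature plus a `**kwargs` catch-all (the signature below is invented for illustration, not the real Petals `.generate()` API):

```python
def generate(inputs, *, max_new_tokens=None, temperature=1.0, **kwargs):
    """Sketch: accept only known options and reject everything else,
    instead of silently ignoring unknown kwargs."""
    if kwargs:
        raise TypeError(f"Unexpected .generate() kwargs: {sorted(kwargs)}")
    # Placeholder body: pretend to generate `max_new_tokens` tokens.
    return ["<token>"] * (max_new_tokens or 1)

tokens = generate([1, 2, 3], max_new_tokens=2)          # fine
# generate([1, 2, 3], repetition_penalty=1.2)  # -> TypeError, fails loudly
```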
Alexander Borzunov
d9e7bfc949
Divide compute throughput by average no. of used blocks (#314)
See #192.
2023-05-09 23:23:08 +04:00
Alexander Borzunov
6137b1b4b0
Replace .make_sequence(..., mode="random") with mode="max_throughput" (#313)
We need to sample the next server using its throughput as the weight to actually achieve max throughput for fine-tuning.

As an example, imagine a situation where we have 3 servers with throughputs [1000, 500, 1] hosting the same blocks, then compare the uniform and weighted sampling strategies.
2023-05-09 22:38:20 +04:00
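The example above can be sketched directly: with throughputs [1000, 500, 1], uniform sampling routes a third of requests to the 1 RPS server, while throughput-weighted sampling almost never picks it. The helper name and server dict are invented for illustration:

```python
import random

def choose_server(servers, mode="max_throughput"):
    """Sketch (invented helper): pick the next server for a block.
    `servers` maps server name -> reported throughput (RPS)."""
    names = list(servers)
    if mode == "random":
        return random.choice(names)              # uniform: ignores capacity
    weights = [servers[n] for n in names]        # weighted by throughput
    return random.choices(names, weights=weights, k=1)[0]

random.seed(0)
servers = {"A": 1000, "B": 500, "C": 1}
picks = [choose_server(servers) for _ in range(10_000)]
# Server C (1 RPS) is now chosen only a handful of times in 10,000 draws.
```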
Alexander Borzunov
0a313bf6c5
Update hivemind to 1.1.8, enable efficient bfloat16 encoding (#311)
This PR:

1. Updates hivemind to 1.1.8 (includes https://github.com/learning-at-home/hivemind/pull/565)
2. Enables efficient bfloat16 serialization by default (`USE_LEGACY_BFLOAT16 = False`)
3. Removes logging code that was upstreamed to hivemind in https://github.com/learning-at-home/hivemind/pull/542
2023-05-07 14:57:05 +04:00
Alexander Borzunov
8f6342a861
Refactor RemoteSequenceManager (#309)
This PR:

1. **Extracts `SequenceManagerConfig` and `SequenceManagerState` subclasses.**

    The config is provided by the caller and never changed from inside `RemoteSequenceManager`. The state is the part of the `RemoteSequenceManager`'s state shared between the main manager and its slices. We fix some slicing bugs along the way.

2. **Removes `dht_prefix` and `p2p` arguments, makes `dht` argument optional.**

    `dht_prefix` can always be overridden using `config.dht_prefix`. `p2p` is actually needed only under the hood of `RemoteSequenceManager`, so it can extract it by itself without exposing this low-level class to callers. If strictly necessary, a caller can provide `p2p` as a part of `SequenceManagerState`. `dht` is also needed only by `RemoteSequenceManager`, so we can make it optional in the parent classes and create it automatically when it's not provided.

3. **Simplifies retry logic.**

    Previously, we could have "nested" retry loops: one in `._update()`, another in inference/forward/backward steps. The loop in `._update()` could introduce issues to concurrent inference/forward/backward calls, since it blocks the entire class if its delay period becomes too high. Now this logic is simplified: `._update()` performs only one attempt to fetch the DHT info, any retries are triggered by the inference/forward/backward steps.

4. **Removes deprecated `RemoteTransformerBlock`.**

    `RemoteTransformerBlock` was deprecated a long time ago, before Petals 1.0.0. Its removal is long overdue.

5. **Removes `dht_utils.get_remote_module()`, `dht_utils.get_remote_sequence()`.**

    These functions duplicate the functionality of the `RemoteSequential` constructor.

6. (minor) **Removes `RemoteSequential.is_subsequence` flag.**

    This flag worked incorrectly and was never used. I am removing it for the sake of simplicity.
2023-05-07 13:41:13 +04:00
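The simplified retry scheme from item 3 puts the retry loop in the step itself, so a single failed metadata update never blocks concurrent callers. A minimal sketch of that loop (names invented; the real code uses exponential backoff and hivemind-specific exceptions):

```python
import time

def with_retries(step, max_attempts=3, delay=0.0):
    """Sketch: retry a single inference/forward/backward step. The update
    of routing info happens once per attempt inside `step`, not in a
    separate loop that could stall other callers."""
    for attempt in range(max_attempts):
        try:
            return step()
        except RuntimeError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            time.sleep(delay)

calls = []
def flaky_step():
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("transient network failure")
    return "ok"

result = with_retries(flaky_step)  # succeeds on the third attempt
```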
Alexander Borzunov
454c193863
Fix OOMs happening in case of accelerate >= 0.16.0 (#310)
- After #285, `load_pretrained_block()` uses `accelerate.utils.set_module_tensor_to_device()`
- In accelerate>=0.16.0, it saves the tensor in the dtype previously used by the model instead of dtype of the weights (https://github.com/huggingface/accelerate/pull/920)
- Because of that, blocks and attention caches used float32, which caused OOMs
- This PR makes `load_pretrained_block()` respect `torch_dtype` (default: `"auto"`, which means reading `torch_dtype` from `config.json`)
2023-04-25 17:20:19 +04:00
Alexander Borzunov
93c4eba5d1
Bump version to 1.1.4 (#306) 2023-04-21 05:41:01 +04:00
Alexander Borzunov
c0e0e1319d
Force transformers to use config.torch_dtype by default (#307) 2023-04-13 14:41:54 +04:00
Alexander Borzunov
98be9ffe4c
Relax the rest of Hugging Face dependencies (#305) 2023-04-13 01:05:35 +04:00
Alexander Borzunov
5c0b4286b2
Suggest commands for Docker first (#304) 2023-04-13 00:00:35 +04:00
Alexander Borzunov
35662b4a16
Require bitsandbytes == 0.38.0.post2, hivemind == 1.1.7 (#302)
In particular, this PR fixes 8-bit support on nvidia16 GPUs (such as 1660) by including https://github.com/TimDettmers/bitsandbytes/pull/292. This support was requested multiple times on Discord.
2023-04-12 23:07:29 +04:00
Alexander Borzunov
21c3526ec1
Start SequenceManager's thread only after first .make_sequence() (#301)
**Why?**

- We'd like to avoid excess threads for the original sequence manager in case we only use its slices (e.g., when we add adapters or need only a subset of model blocks).

- If we create a sequence manager just before a fork (e.g. in a web app backend or a multi-thread benchmark), we'd like to avoid excess threads in the original process and start the thread only in child processes where we actually call `.make_sequence()`.
2023-04-12 21:38:43 +04:00
Alexander Borzunov
6c6150f684
Remove use_auto_relay=True in client (#300)
`use_auto_relay=True` makes the libp2p daemon look for relays to become reachable if we are behind NAT/firewall. However, being reachable is not necessary for the Petals client, and we should not spend the relays' capacity on this.
2023-03-31 16:39:48 +04:00