Commit Graph

495 Commits

Alexander Borzunov
b28f5016ea
Delete deprecated petals.cli scripts (#336) 2023-07-11 21:42:35 +04:00
Alexander Borzunov
fa095f6461
Use 4-bit for llama by default, use bitsandbytes 0.40.0.post3 (#340)
NF4 inference with bitsandbytes 0.40.0.post3 is ~2x faster than int8 inference, though training is still ~3x slower, see:

- [bitsandbytes 0.40.0 Release notes](https://github.com/TimDettmers/bitsandbytes/releases/tag/0.40.0)
- [RPS benchmarks](https://github.com/bigscience-workshop/petals/pull/333#issuecomment-1614040385)

We've decided to use NF4 by default for LLaMA.
2023-07-11 18:53:17 +04:00
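
To illustrate what hosting a block in 4-bit means, here is a minimal sketch of quantizing a single linear layer to NF4 with bitsandbytes. This is an illustration of the format, not Petals' actual server code:

```python
import torch
import bitsandbytes as bnb

def quantize_linear_nf4(linear: torch.nn.Linear) -> bnb.nn.Linear4bit:
    """Replace an nn.Linear with an NF4-quantized equivalent (QLoRA format)."""
    qlinear = bnb.nn.Linear4bit(
        linear.in_features,
        linear.out_features,
        bias=linear.bias is not None,
        compute_dtype=torch.bfloat16,  # matmuls still run in bf16
        quant_type="nf4",              # 4-bit NormalFloat data type
    )
    qlinear.weight.data = linear.weight.data
    if linear.bias is not None:
        qlinear.bias.data = linear.bias.data
    return qlinear.cuda()  # weights are quantized to 4 bits when moved to the GPU
```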
Alexander Borzunov
158013a671
Implement direct server-to-server communication (#331)
Implement #226.
2023-07-11 17:29:34 +04:00
Alexander Borzunov
4d9c26fe5c
Allow free_disk_space_for() remove arbitrary files from Petals cache (#339)
Before this PR, `free_disk_space_for()` was able to remove **(a)** only entire cached revisions (= git commits/branches) and **(b)** only from the repository we're loading right now.

This PR allows this function to remove arbitrary files from any repository, independently of entire revisions.

This is useful for the transition to Petals 1.2.0+, since it now uses the original repos instead of the ones with converted models (see #323). In particular, the cache for `bigscience/bloom-petals` is now deprecated and should be removed in favor of `bigscience/bloom`. This is also useful as a way to free space before loading LoRA adapters (#335).
2023-07-05 14:57:59 +04:00
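
As a rough sketch of the idea in the commit above (names are assumptions, not the actual implementation), one can scan the shared Hugging Face cache and delete the least recently used blobs, from any repo, until enough space is freed:

```python
from huggingface_hub import scan_cache_dir

def free_disk_space_for(bytes_needed: int) -> None:
    """Delete least-recently-accessed cached files, regardless of their repo."""
    cache_info = scan_cache_dir()
    files = [
        file
        for repo in cache_info.repos
        for revision in repo.revisions
        for file in revision.files
    ]
    files.sort(key=lambda file: file.blob_last_accessed)  # oldest access first
    freed = 0
    for file in files:
        if freed >= bytes_needed:
            break
        file.blob_path.unlink(missing_ok=True)  # remove the underlying blob
        freed += file.size_on_disk
```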
Alexander Borzunov
de930918a0
Support loading blocks in 4-bit (QLoRA NF4 format, disabled by default) (#333) 2023-07-03 20:13:04 +04:00
Alexander Borzunov
66a47c763e
Require pydantic < 2.0 (2.0 is incompatible with hivemind 1.1.8) (#337)
See https://github.com/learning-at-home/hivemind/pull/573.
2023-07-02 03:32:51 +04:00
Alexander Borzunov
10c72acdf4
Fix warmup steps and minor issues in benchmarks (#334)
The previous code was incorrect for the case of `warmup_steps != 1` (this mode was never used before but may be used in the future).
2023-06-30 04:18:43 +04:00
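
The general shape of a correct warmup in a throughput benchmark, as a hedged sketch (the repository's benchmark scripts are the reference):

```python
import time

def benchmark_steps_per_sec(step_fn, n_steps: int, warmup_steps: int) -> float:
    """Run `step_fn` repeatedly, excluding all warmup calls from the timing."""
    for _ in range(warmup_steps):  # warm up caches, allocators, server connections
        step_fn()
    start = time.perf_counter()
    for _ in range(n_steps):
        step_fn()
    return n_steps / (time.perf_counter() - start)
```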
Alexander Borzunov
d126ee3053
Add benchmark scripts (#319)
This PR:

- Adds benchmark scripts for inference, forward pass, and full training step (e.g. used for experiments in our paper).
- Fixes bug with dtypes in `petals.DistributedBloomForSequenceClassification`.
- (minor refactor) Moves `DTYPE_MAP` to `petals.constants` as a useful constant.
2023-06-30 01:12:59 +04:00
Alexander Borzunov
fecee8c4dc
Show license links when loading models (#332) 2023-06-24 20:19:18 +04:00
Alexander Borzunov
47a2b1ee65
Fix llama's lm_head.weight.requires_grad (#330)
By default, LLaMA's `lm_head.weight.requires_grad` was `True`, but we expect it to be `False`.
2023-06-24 02:30:13 +04:00
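
The fix amounts to freezing the LM head weight after loading; a toy stand-in sketch (the layer here is illustrative, not the real model's head):

```python
import torch

# Stand-in for the client-side LM head; the real one arrived with
# requires_grad=True by default, which is not what we want.
lm_head = torch.nn.Linear(8192, 32000, bias=False)
lm_head.weight.requires_grad_(False)  # frozen: it should not accumulate gradients
assert not lm_head.weight.requires_grad
```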
Alexander Borzunov
7a37513f77
Add AutoDistributed{Model, ModelForCausalLM, ModelForSequenceClassification} (#329)
This PR adds `petals.AutoDistributed{Model, ModelForCausalLM, ModelForSequenceClassification}` classes, similar to their `transformers.Auto{Model, ModelForCausalLM, ModelForSequenceClassification}` counterparts.
2023-06-23 18:42:50 +04:00
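
Typical usage mirrors the `transformers` Auto classes; a short example (the model name is illustrative):

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "bigscience/bloom"  # the Auto class dispatches on the config's model type
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
```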
Alexander Borzunov
cb3f018f9f
Add LLaMA support (#323)
This PR:

1. **Abolishes the model conversion procedure.** Now, models are downloaded directly from original repositories like https://huggingface.co/bigscience/bloom. Servers download only shards with blocks to be hosted, and clients download only shards with input/output embeddings and layernorms.

    - BLOOM is loaded from `bigscience/bloom`, but we use the DHT prefix `bigscience/bloom-petals` for backward compatibility. Same with smaller BLOOMs and BLOOMZ.
    - LLaMA can be loaded from any repo like `username/llama-65b-hf`, but we use the DHT prefix `llama-65b-hf` (without the username) to accommodate blocks from different repos (there are a few such repos with minor differences, such as `Llama` vs. `LLaMA` in the class name).

2. **Refactors the client to generalize it for multiple models.** Now, we have `petals.models` packages that contain model-specific code (e.g. `petals.models.bloom`, `petals.models.llama`). General code (e.g. CPU-efficient LM head, p-tuning) is kept in `petals.client`.

3. **Introduces** `WrappedLlamaBlock`, `DistributedLlamaConfig`, `DistributedLlamaForCausalLM`, `DistributedLlamaForSequenceClassification`, and `DistributedLlamaModel` compatible with Petals functionality (p-tuning, adapters, etc.).

4. **Introduces** `AutoDistributedConfig` that automatically chooses the correct config class (`DistributedLlamaConfig` or `DistributedBloomConfig`). The refactored configs contain all model-specific info for both clients and servers.

Upgrade instructions:

- Remove disk caches for blocks in the old (converted) format to save disk space. That is, remove the `~/.cache/petals/model--bigscience--bloom-petals` and `~/.cache/petals/model--bigscience--bloomz-petals` directories (if present).
2023-06-23 15:46:10 +04:00
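
A short sketch of point 4 in action (the repo name is a placeholder, as in the commit message above):

```python
from petals import AutoDistributedConfig

# The config class is chosen automatically from the model type in config.json:
# DistributedLlamaConfig for LLaMA repos, DistributedBloomConfig for BLOOM repos.
config = AutoDistributedConfig.from_pretrained("username/llama-65b-hf")
print(type(config).__name__)  # e.g. "DistributedLlamaConfig"
```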
Max Ryabinin
5c0733711a
Use number of tokens for attn_cache_size (#286)
* Use number of tokens for attn_cache_size

* Fix cache_bytes_per_block

* Rename attn_cache_size to attn_cache_tokens
2023-06-17 14:14:31 +03:00
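
Specifying the cache budget in tokens makes the setting model-agnostic; the server converts it to bytes per block internally. A hypothetical sketch of that conversion (the names and exact formula are assumptions, not the repository's code):

```python
def cache_bytes_per_block(attn_cache_tokens: int, hidden_size: int, element_size: int = 2) -> int:
    """Translate a token budget into bytes of attention cache for one block."""
    # Each cached token stores one key and one value vector of `hidden_size` elements.
    return attn_cache_tokens * 2 * hidden_size * element_size

# e.g. 8192 tokens * 2 * 14336 (BLOOM-176B) * 2 bytes (bf16) ≈ 470 MB per block
print(cache_bytes_per_block(8192, 14336))
```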
Max Ryabinin
c839173e57
Determine block dtype in a unified manner (#325)
* Extract backend_dtype, remove duplicate DTYPE_MAP

* Use bfloat16 as the default dtype, resolve dtype in load_pretrained_block
2023-06-16 12:52:51 +03:00
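
The unified rule, as a sketch (the function name is assumed): an explicit dtype wins, `"auto"` defers to `config.torch_dtype`, and bfloat16 is the fallback:

```python
import torch

def resolve_block_dtype(config, dtype) -> torch.dtype:
    """Map a user-facing torch_dtype setting to a concrete torch.dtype."""
    if dtype not in ("auto", None):
        return dtype               # an explicit dtype is used as-is
    if getattr(config, "torch_dtype", None) not in ("auto", None):
        return config.torch_dtype  # "auto" defers to the checkpoint's config
    return torch.bfloat16          # unified default
```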
Max Ryabinin
3e7ae5116d
Remove unused imports and attributes (#324)
* Remove unused imports and attributes
2023-06-11 00:44:41 +03:00
Alexander Borzunov
675bacb592
Bump version to 1.1.5 (#312) 2023-05-10 03:01:01 +04:00
Alexander Borzunov
e026952338
Abort speedtest if it runs too long (#316)
Addresses #192 and, specifically, #280.
2023-05-10 01:53:31 +04:00
Alexander Borzunov
6eb306a605
Raise error for unexpected .generate() kwargs (#315)
Previously, if a user passed unexpected kwargs to `.generate()`, they were silently __ignored__, and the code continued working as if the arguments were correctly supported. For example, people often tried passing `repetition_penalty` and didn't notice that it had no effect. This PR fixes this problem.
2023-05-09 23:51:57 +04:00
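
The fix boils down to validating kwargs against the set of supported arguments instead of dropping them; an illustrative sketch (the accepted set here is abbreviated):

```python
def check_generate_kwargs(**kwargs) -> None:
    """Raise instead of silently ignoring unsupported .generate() arguments."""
    accepted = {"max_length", "max_new_tokens", "do_sample", "temperature", "top_k", "top_p"}
    unexpected = set(kwargs) - accepted
    if unexpected:
        raise TypeError(f"Unexpected .generate() kwargs: {sorted(unexpected)}")

check_generate_kwargs(max_new_tokens=5)        # fine
check_generate_kwargs(repetition_penalty=1.2)  # now raises TypeError
```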
Alexander Borzunov
d9e7bfc949
Divide compute throughput by average no. of used blocks (#314)
See #192.
2023-05-09 23:23:08 +04:00
Alexander Borzunov
6137b1b4b0
Replace .make_sequence(..., mode="random") with mode="max_throughput" (#313)
We need to sample the next server using its throughput as the weight to actually achieve max throughput for fine-tuning.

As an example, imagine a situation where we have 3 servers with throughputs [1000, 500, 1] hosting the same blocks, then compare the uniform and weighted sampling strategies.
2023-05-09 22:38:20 +04:00
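
Concretely: with throughputs [1000, 500, 1], uniform sampling sends a third of all requests to the slowest server, so it saturates when total load reaches about 3 requests/sec, while throughput-weighted sampling approaches the 1501 requests/sec the servers can jointly sustain. A minimal sketch of the weighted choice:

```python
import random

servers = ["A", "B", "C"]
throughputs = [1000, 500, 1]  # measured requests/sec of each server

# Sample the next server proportionally to its throughput, so faster
# servers receive proportionally more of the load:
next_server = random.choices(servers, weights=throughputs, k=1)[0]
```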
Alexander Borzunov
0a313bf6c5
Update hivemind to 1.1.8, enable efficient bfloat16 encoding (#311)
This PR:

1. Updates hivemind to 1.1.8 (includes https://github.com/learning-at-home/hivemind/pull/565)
2. Enables efficient bfloat16 serialization by default (`USE_LEGACY_BFLOAT16 = False`)
3. Removes logging code that was upstreamed to hivemind in https://github.com/learning-at-home/hivemind/pull/542
2023-05-07 14:57:05 +04:00
Alexander Borzunov
8f6342a861
Refactor RemoteSequenceManager (#309)
This PR:

1. **Extracts `SequenceManagerConfig` and `SequenceManagerState` subclasses.**

    The config is provided by the caller and never changed from inside `RemoteSequenceManager`. The state is the part of `RemoteSequenceManager`'s state that is shared between the main manager and its slices. We fix some slicing bugs along the way.

2. **Removes `dht_prefix` and `p2p` arguments, makes `dht` argument optional.**

    `dht_prefix` can always be overridden using `config.dht_prefix`. `p2p` is actually needed only under the hood of `RemoteSequenceManager`, so it can obtain it by itself without exposing this low-level object to callers. If strictly necessary, a caller can still provide `p2p` as a part of `SequenceManagerState`. `dht` is also needed only by `RemoteSequenceManager`, so we can make it optional in the parent classes and create it automatically when it's not provided.

3. **Simplifies retry logic.**

    Previously, we could have "nested" retry loops: one in `._update()`, another in inference/forward/backward steps. The loop in `._update()` could interfere with concurrent inference/forward/backward calls, since it blocked the entire class whenever its delay period grew too long. Now this logic is simplified: `._update()` performs only one attempt to fetch the DHT info; any retries are triggered by the inference/forward/backward steps themselves.

4. **Removes deprecated `RemoteTransformerBlock`.**

    `RemoteTransformerBlock` was deprecated a long time ago, before Petals 1.0.0. Its removal is long overdue.

5. **Removes `dht_utils.get_remote_module()`, `dht_utils.get_remote_sequence()`.**

    These functions duplicate the functionality of the `RemoteSequential` constructor.

6. (minor) **Removes `RemoteSequential.is_subsequence` flag.**

    This flag worked incorrectly and was never used. I am removing it for the sake of simplicity.
2023-05-07 13:41:13 +04:00
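
A schematic sketch of the config/state split from item 1 (the field names are illustrative, not the actual class definitions):

```python
from dataclasses import dataclass, field

@dataclass
class SequenceManagerConfig:
    """Provided by the caller; never mutated inside RemoteSequenceManager."""
    dht_prefix: str = ""
    request_timeout: float = 30.0

@dataclass
class SequenceManagerState:
    """Mutable state shared between the main manager and its slices."""
    p2p: object = None
    banned_peers: set = field(default_factory=set)
```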
Alexander Borzunov
454c193863
Fix OOMs happening in case of accelerate >= 0.16.0 (#310)
- After #285, `load_pretrained_block()` uses `accelerate.utils.set_module_tensor_to_device()`
- In accelerate >= 0.16.0, it saves the tensor in the dtype previously used by the model instead of the dtype of the weights (https://github.com/huggingface/accelerate/pull/920)
- Because of that, blocks and attention caches used float32, which caused OOMs
- This PR makes `load_pretrained_block()` respect `torch_dtype` (default: `"auto"`, which means reading `torch_dtype` from `config.json`)
2023-04-25 17:20:19 +04:00
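
A sketch of the fix: cast each weight explicitly before handing it to accelerate, so the module's stale dtype cannot leak in (the function name here is an assumption):

```python
import torch
from accelerate.utils import set_module_tensor_to_device

def load_block_weights(block: torch.nn.Module, state_dict: dict, torch_dtype: torch.dtype) -> None:
    """Materialize downloaded weights in the requested dtype, not the module's old one."""
    for name, tensor in state_dict.items():
        set_module_tensor_to_device(block, name, "cpu", value=tensor.to(torch_dtype))
```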
Alexander Borzunov
93c4eba5d1
Bump version to 1.1.4 (#306) 2023-04-21 05:41:01 +04:00
Alexander Borzunov
c0e0e1319d
Force transformers to use config.torch_dtype by default (#307) 2023-04-13 14:41:54 +04:00
Alexander Borzunov
98be9ffe4c
Relax the rest of Hugging Face dependencies (#305) 2023-04-13 01:05:35 +04:00
Alexander Borzunov
5c0b4286b2
Suggest commands for Docker first (#304) 2023-04-13 00:00:35 +04:00
Alexander Borzunov
35662b4a16
Require bitsandbytes == 0.38.0.post2, hivemind == 1.1.7 (#302)
In particular, this PR fixes 8-bit support on nvidia16 GPUs (such as 1660) by including https://github.com/TimDettmers/bitsandbytes/pull/292. This support was requested multiple times on Discord.
2023-04-12 23:07:29 +04:00
Alexander Borzunov
21c3526ec1
Start SequenceManager's thread only after first .make_sequence() (#301)
**Why?**

- We'd like to avoid excess threads for the original sequence manager in case we only use its slices (e.g. when we add adapters or need only a subset of model blocks).

- If we create a sequence manager just before a fork (e.g. in a web app backend or a multi-thread benchmark), we'd like to avoid excess threads in the original process and only use this thread in child processes where we actually call `.make_sequence()`.
2023-04-12 21:38:43 +04:00
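
The pattern, sketched with hypothetical names: defer thread creation until the first call that actually needs it, so forked processes each start their own thread on demand:

```python
import threading

class LazySequenceManager:
    def __init__(self):
        self._updater = None
        self._lock = threading.Lock()

    def make_sequence(self):
        with self._lock:
            if self._updater is None:  # first call in this process: start the thread now
                self._updater = threading.Thread(target=self._update_loop, daemon=True)
                self._updater.start()
        # ... build and return the sequence ...

    def _update_loop(self):
        pass  # periodically refresh routing info in the background
```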
Alexander Borzunov
6c6150f684
Remove use_auto_relay=True in client (#300)
`use_auto_relay=True` makes the libp2p daemon look for relays to become reachable if we are behind NAT/firewall. However, being reachable is not necessary for the Petals client, and we should not spend the relays' capacity on this.
2023-03-31 16:39:48 +04:00
Alexander Borzunov
892fa2386a
Remove CustomLinear8bitLt (#297)
This became a part of https://github.com/TimDettmers/bitsandbytes/releases/tag/0.37.0.
2023-03-29 05:21:16 +04:00
Alexander Borzunov
74d8cda8c4
Add Python 3.10 to CI (#299) 2023-03-29 04:41:07 +04:00
Alexander Borzunov
2116df08bc
Fix deps, enable 8-bit by default for TP (#298)
This PR fixes issues of #290:

- hivemind's bfloat16 codec crashed on dummy tensors (with 0 elements), see https://github.com/learning-at-home/hivemind/pull/560 (this PR temporarily makes Petals depend on the latest hivemind version from its repo)
- the transformers version check did not match the versions allowed in `setup.cfg`

Also:

- This PR enables 8-bit by default for TP. Even though TP in 8-bit may be slower, we currently prefer to host more blocks to increase the network's stability.
2023-03-29 04:21:37 +04:00
justheuristic
987f4d2b2f
Update bitsandbytes, hivemind, transformers (#290)
- new bitsandbytes supports newer *and* older GPUs
- new hivemind supports a better bfloat16 codec

Co-authored-by: Alexander Borzunov <borzunov.alexander@gmail.com>
2023-03-29 01:20:29 +04:00
Alexander Borzunov
e0cef73757
Hotfix: Increase daemon_startup_timeout (#292)
For some reason, 15 sec is currently not enough to connect to the bootstrap peers in the public swarm, as reported by multiple users and observed by me. This PR increases the timeout to 120 sec until we find the root cause of the issue.
2023-03-15 17:21:30 +04:00
Alexander Borzunov
a7d3d02194
Fix invalid author email in setup.cfg (#287) 2023-03-13 06:21:09 +04:00
Alexander Borzunov
8dab37c1a9
Add benchmarks to readme (#284) 2023-03-13 05:55:27 +04:00
Max Ryabinin
793726b041
Speed up loading blocks using init with meta weights (#285)
* Init WrappedBloomBlock with meta weights

---------

Co-authored-by: Alexander Borzunov <borzunov.alexander@gmail.com>
2023-03-13 00:49:04 +03:00
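
The trick, as a toy sketch using the torch >= 2.0 device context manager: build the module on the meta device (no memory allocation, no random init), then materialize it and copy real weights in. A small linear layer stands in for `WrappedBloomBlock`:

```python
import torch

with torch.device("meta"):            # parameters are created without storage
    block = torch.nn.Linear(1024, 4096)

block = block.to_empty(device="cpu")  # allocate uninitialized real storage
state_dict = {"weight": torch.zeros(4096, 1024), "bias": torch.zeros(4096)}
block.load_state_dict(state_dict)     # fill with the downloaded (here: dummy) weights
```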
Alexander Borzunov
c519bffc59
Bump version to 1.1.3 (#278) 2023-03-01 13:04:21 +04:00
Alexander Borzunov
aae1f4f368
Increase default request_timeout (#276)
This PR increases `request_timeout`, since the previous default of 30 sec is not enough for many use cases.

Previously, we kept the request timeout low since we assumed that the server could freeze on dial if the target peer is behind a firewall. However, apparently, it won't freeze because libp2p has its own [dial timeout](https://github.com/libp2p/go-libp2p/blob/v0.26.0/core/network/context.go#L11).
2023-02-27 16:43:06 +04:00
justheuristic
fb2583b682
Use inference mode in _MergedInferenceStep (#275) 2023-02-27 13:28:01 +04:00
Alexander Borzunov
fd9400b392
Fix use_chunked_forward="auto" on non-x86_64 machines (#267)
Importing `cpufeature` may crash on non-x86_64 machines, so this PR makes the client import it only when necessary.
2023-02-21 06:11:53 +04:00
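
A sketch of the guarded import (the exact CPU-flag policy below is an assumption):

```python
import platform

def use_chunked_forward_auto() -> bool:
    """Resolve use_chunked_forward="auto" without crashing on non-x86_64 CPUs."""
    if platform.machine() != "x86_64":
        return True  # do not even import cpufeature here
    import cpufeature  # imported lazily, only where it is known to work
    return not cpufeature.CPUFeature["AVX512f"]  # chunk the LM head on weaker CPUs
```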
Alexander Borzunov
a2e7f27a5a
Improve "connect your GPU" message (#266) 2023-02-19 07:00:16 +04:00
Alexander Borzunov
fee19e9b9b
Use get_logger(__name__) instead of get_logger(__file__) (#265) 2023-02-19 05:46:17 +04:00
Alexander Borzunov
55e7dc07a0
Limit max delay between retries to 15 min (#264) 2023-02-19 05:07:21 +04:00
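
A sketch of the capped backoff with jitter (the constants are illustrative except the 15-minute ceiling):

```python
import random

def retry_delay(attempt: int, base: float = 1.0, cap: float = 15 * 60) -> float:
    """Exponential backoff with jitter, never exceeding 15 minutes."""
    return min(cap, base * 2 ** attempt * random.uniform(0.5, 1.5))
```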
Alexander Borzunov
38b071135b
Show visible maddrs for public swarm too (#263) 2023-02-19 04:34:47 +04:00
Alexander Borzunov
42594e5173
Link FAQ in readme (#260) 2023-02-17 07:54:02 +04:00
Alexander Borzunov
2a5070aa1a
Improve reachability logs (#253) 2023-02-07 01:52:36 +04:00
Alexander Borzunov
4091db10bf
Lower payload size threshold for stream handlers (#251)
Hotfix: we add "// 2" since hivemind==1.1.5 serializes bfloat16 tensors in float32, so they take 2x more space.
2023-02-07 00:56:58 +04:00
Alexander Borzunov
9954cb84fe
Add allowed_servers, max_retries options to the client, improve logs (#235) 2023-02-06 02:22:18 +04:00