Commit Graph

504 Commits

Author SHA1 Message Date
justheuristic
f96be67364
Create push-docker-test2.yaml 2024-02-19 21:11:53 +03:00
Denis Mazur
faf1e71fe4 try bumping build-push-action version 2024-02-17 20:24:53 +03:00
Denis Mazur
0d91bbdac3
Bump transformers and accelerate versions (#554)
Bump versions for transformers and accelerate, remove falcon-rw-1b CI tests
2024-02-15 15:24:37 +03:00
justheuristic
d59c15c578
Bump version for inference diagnostics (#543)
bump version for inference diagnostics
2023-11-16 06:12:30 +03:00
Max Ryabinin
03cbe90234
Optimize LLaMA for inference (#513)
* Optimize LLaMa for inference
* Fix model type detection in tests
2023-11-14 20:14:19 +03:00
justheuristic
25a0796b39
Hotfix: require peft version 0.5.0 (#539)
Peft: strict version check for now

Co-authored-by: horik <hr.mail.qaq@gmail.com>
2023-11-07 19:05:54 +03:00
justheuristic
dcce43670f
Hotfix: set transformers version <=4.34 temporarily (#538)
* fix transformers version for now


Co-authored-by: horik <hr.mail.qaq@gmail.com>
2023-11-07 18:19:19 +03:00
Alexander Borzunov
82a97d6e9e
Fix beam search in GPU clients (#531)
Fixes #503.
2023-10-23 20:13:13 +04:00
Alexander Borzunov
47d50e1e29
Improve default arguments for clients and servers (#530)
This PR updates multiple default arguments in clients and servers:

1. **The client defaults to `torch_dtype=torch.float32` instead of `torch_dtype="auto"`.**

    The old default was to load weights in the dtype they are saved in (usually bfloat16/float16), which caused issues when the client was run on CPU (the default unless you call `.cuda()`). Specifically, bfloat16 is slow on most CPUs (unless a CPU supports AVX512) and float16 can't be run natively and leads to an exception. This default was a legacy of the earliest Petals versions designed to run BLOOM - its embeddings were so big that they didn't fit into RAM in float32 (e.g., in Colab). The newer models don't have this issue.

    In contrast, the new default leads to good speed on all CPUs and is consistent with PyTorch and HF Transformers. Also, the client now shows the "bfloat16 on a non-AVX512 CPU" warning in all cases (previously, this warning was shown only if the machine had enough RAM to fit float32 weights, which could hide the crucial reason for slow inference).

    **Note:** This change is backward-incompatible, so we have to increase at least the minor package version (2.2.0 -> 2.3.0.dev0).

2. **The server uses 2x smaller `--attn_cache_tokens`.**

    The old default led to loading 39 (out of 80) or 78 (out of 80) blocks for popular models on some GPU types, which visibly slowed down inference due to an excess network hop. It also reserved more memory for the cache than necessary, so inference slowed down well before the cache was actually used.

    The new default leads to more efficient block layouts and makes the inference routing algorithm choose alternative paths through other servers when a particular server already has enough active inference sessions (= its cache is full).

3. **The client's max number of retries can be limited by the `PETALS_MAX_RETRIES` env var.**

    This is to limit `ClientConfig.max_retries` in tests, so we see tracebacks instead of retrying indefinitely in case of errors. A hedged usage sketch covering items 1 and 3 follows this list.
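
As a hedged illustration of items 1 and 3 above, the snippet below loads a client model with an explicit dtype and a capped retry count; the model name and retry value are placeholders, and `AutoDistributedModelForCausalLM` is the client entry point shown elsewhere in this log.

```python
import os

import torch
from petals import AutoDistributedModelForCausalLM  # import path as documented in the Petals README

# Item 3: cap the client's retries (e.g. in CI) so errors surface as tracebacks.
os.environ["PETALS_MAX_RETRIES"] = "3"

model_name = "petals-team/StableBeluga2"  # example repo mentioned elsewhere in this log

# Item 1: float32 is now the default dtype; passing it explicitly here just makes that visible.
model = AutoDistributedModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float32)

# The old behavior can still be requested explicitly, e.g. for a GPU client:
# model = AutoDistributedModelForCausalLM.from_pretrained(model_name, torch_dtype="auto").cuda()
```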
2023-10-23 03:26:40 +04:00
Max Ryabinin
ae19b65095
Add position_ids argument to DistributedFalconModel (#525) 2023-10-08 22:09:46 +03:00
Alexander Borzunov
1d9401ddce
Update README.md (#520) 2023-09-22 06:16:32 +04:00
FYY
a2484b3053
Fix file locks in NFS-mounted directories (#517)
Fix #515.
2023-09-20 04:01:23 +04:00
Alexander Borzunov
5ce4f1a159
Store (start_block, end_block) in each DHT record for reliability (#510)
This PR fixes gaps in the DHT server info caused by unavailable DHT keys. Now, one DHT key is enough to get info about all blocks hosted by a server, so the info stays visible unless all of the server's keys become unavailable.

Also, this PR refactors `petals.client.routing` and `petals.server.block_selection` modules to use the common `compute_spans()` function (defined in `petals.utils.dht`) and `RemoteSpanInfo` class (defined in `petals.data_structures`).
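
For illustration only, here is a minimal sketch of the span idea behind this change: collapsing a server's announced block indices into contiguous `(start_block, end_block)` records. The helper below is hypothetical and is not the actual `compute_spans()` from `petals.utils.dht`.

```python
from typing import List, Tuple

def merge_into_spans(block_indices: List[int]) -> List[Tuple[int, int]]:
    """Collapse block indices into contiguous (start_block, end_block) spans.

    Hypothetical helper for illustration; end_block is exclusive, matching Python ranges.
    """
    spans: List[Tuple[int, int]] = []
    for index in sorted(set(block_indices)):
        if spans and index == spans[-1][1]:
            spans[-1] = (spans[-1][0], index + 1)  # extend the current span
        else:
            spans.append((index, index + 1))       # start a new span
    return spans

assert merge_into_spans([5, 6, 7, 40, 41]) == [(5, 8), (40, 42)]
```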
2023-09-15 23:53:57 +04:00
Alexander Borzunov
158621677b
Bump version to 2.2.0 (#502) 2023-09-06 19:43:30 +04:00
Max Ryabinin
1ebd88ae7b
Optimize the Falcon block for inference (#500)
This PR attempts to optimize the inference of Falcon models in the single-token setup by reducing the majority of Python overhead and making several assumptions about the setup. Specifically,

* Layer normalization, QKV projection (with splitting) and rotary embeddings are executed through CUDA graphs, which reduces most of the overhead related to small kernel launches (see the sketch after this list)
* If no sin/cos tensors are cached by the rotary embedding layer, we cache them for 8192 tokens (INFERENCE_MAX_LENGTH) during the first forward pass. In general, it should be beneficial to always run a max-length sequence before starting to serve a block, but this is a question for another PR
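
For reference, a minimal hedged sketch of the CUDA-graph capture-and-replay pattern mentioned in the first bullet, using PyTorch's public API; the `block` here is a stand-in `nn.Linear`, not the actual fused Falcon ops, and shapes are placeholders.

```python
import torch

# Stand-ins: a single block and single-token hidden states (shapes depend on the Falcon config).
hidden_size = 4544
block = torch.nn.Linear(hidden_size, hidden_size, device="cuda", dtype=torch.bfloat16)
block.requires_grad_(False)
static_input = torch.zeros(1, 1, hidden_size, device="cuda", dtype=torch.bfloat16)

# Warm up on a side stream so the allocator and kernels are settled before capture.
stream = torch.cuda.Stream()
stream.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(stream):
    for _ in range(3):
        block(static_input)
torch.cuda.current_stream().wait_stream(stream)

# Capture the sequence of small kernels once...
graph = torch.cuda.CUDAGraph()
with torch.cuda.graph(graph):
    static_output = block(static_input)

# ...then replay it on every decoding step, only copying fresh data into the static buffers.
def run_step(new_hidden_states: torch.Tensor) -> torch.Tensor:
    static_input.copy_(new_hidden_states)
    graph.replay()
    return static_output.clone()
```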

The PR also adds a small test to ensure that the results of the block (without quantization) before and after the optimization indeed match.

Lastly, the pull request makes the backward pass work (as discussed in https://github.com/bigscience-workshop/petals/pull/499) by making cached sin/cos for RotaryEmbedding into buffers and disabling the inference mode during their creation.
2023-09-04 15:38:32 +03:00
Alexander Borzunov
d40eb6c701
Fix prompt tuning after #464 (#501)
Unfortunately, running inference in models with `"ptune" in config.tuning_mode` was broken after #464.
2023-09-04 12:25:29 +04:00
Alexander Borzunov
dd4a3230bc
Add Falcon support (#499)
This PR adds:

- Support for models based on `transformers.FalconModel` (the in-library format for Falcon). Tested on Falcon-40B.
- CI tests for Falcon-RW-1B.
- `--throughput dry_run` option to evaluate throughput and exit right away (implemented by @mryab).

Limitations:

- Backward pass support is broken for now, will be fixed in #500.

Co-authored-by: Max Ryabinin <mryabinin0@gmail.com>
2023-09-04 01:45:37 +04:00
Alexander Borzunov
b4d822afb2
Force use_cache=True in config only (#497)
This reverts a part of #496 and instead overrides `use_cache` in `LlamaConfig`s only (so the correct value is visible to HF `.generate()` as well).
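
A small hedged sketch of the config-level override, using the standard transformers config API (the repo name is just the tiny CI model mentioned elsewhere in this log):

```python
from transformers import LlamaConfig

# Load the config and force the cache flag there, so HF `.generate()` sees the same value.
config = LlamaConfig.from_pretrained("Maykeye/TinyLLama-v0")
config.use_cache = True
```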
2023-09-03 01:16:00 +04:00
Alexander Borzunov
abd547735f
Force use_cache=True (#496) 2023-09-02 22:57:18 +04:00
Alexander Borzunov
6ef6bf5fa2
Create model index in DHT (#491)
This PR creates an index of models hosted in the swarm. It is useful for knowing which custom models users run and for displaying them at https://health.petals.dev as "not officially supported" models.
2023-08-31 10:31:03 +04:00
Alexander Borzunov
6bb3f54e39
Replace dots in repo names when building DHT prefixes (#489) 2023-08-30 23:31:39 +04:00
Alexander Borzunov
02fc71eb25
Fix race condition in MemoryCache (#487) 2023-08-30 14:13:43 +04:00
Alexander Borzunov
dc0072fde1
Wait for DHT storing state OFFLINE on shutdown (#486) 2023-08-30 07:48:11 +04:00
Alexander Borzunov
a26559ff65
Fix .generate(input_ids=...) (#485) 2023-08-30 06:59:33 +04:00
Alexander Borzunov
459933f846
Remove no-op process in PrioritizedTaskPool (#484)
Please revert this if you ever need to make `PrioritizedTaskPool` a process again.
2023-08-30 06:07:04 +04:00
Alexander Borzunov
26ebbfe8f0
Support macOS (#477)
This PR makes both clients and servers work on macOS. Specifically, it:

- Follows https://github.com/learning-at-home/hivemind/pull/586 to run a macOS-compatible `p2pd` binary (both x86-64 and ARM64 are supported)
- Fixes forking issues and tests on macOS, Python 3.10+
- Introduces basic support for serving model blocks on Apple M1/M2 GPUs (torch.mps)
- Increases the max number of open files by default (the default limit is not enough on Linux and is really small on macOS)
2023-08-29 07:49:27 +04:00
Alexander Borzunov
75e516a8c1
Refactor readme (#482) 2023-08-28 19:09:13 +04:00
justheuristic
c08d09c4d3
Rewrite MemoryCache alloc_timeout logic (#434)
- rpc_inference: the server now accepts an allocation timeout from the user; defaults to no timeout
- bugfix: the inference timeout is now measured from the moment the request is received (see the sketch after this list)
    - previously, you would have to wait for your timeout plus the time it takes to sort through the queue (other users' timeouts)
    - now, you get AllocationFailed if you had to wait for over (timeout) seconds, regardless of other users
- a request for inference with no timeout now fails instantly if there is not enough memory available
- the dtype's number of bytes is now correctly determined for int, bool & other types
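
A hedged sketch of the timing rule in the bugfix bullet above: the deadline is counted from the moment the request arrives, so waiting behind other users counts against the caller's own budget. `reserve_cache` and `AllocationTimeoutError` are hypothetical names, not the actual MemoryCache API.

```python
import asyncio
import time
from typing import Optional

class AllocationTimeoutError(Exception):
    """Raised when cache memory could not be reserved within the caller's timeout."""

async def reserve_cache(capacity: asyncio.Semaphore, timeout: Optional[float]) -> None:
    """Hypothetical helper: a semaphore stands in for free attention-cache capacity."""
    arrival_time = time.monotonic()  # the clock starts when the request is received
    if timeout is None:
        # No timeout given: fail fast instead of queueing behind other users.
        if capacity.locked():
            raise AllocationTimeoutError("not enough memory and no allocation timeout was given")
        await capacity.acquire()
        return
    try:
        # The caller waits at most `timeout` seconds total, regardless of other users' requests.
        await asyncio.wait_for(capacity.acquire(), timeout=timeout)
    except asyncio.TimeoutError:
        waited = time.monotonic() - arrival_time
        raise AllocationTimeoutError(f"allocation failed after waiting {waited:.1f} s") from None
```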


---------

Co-authored-by: Your Name <you@example.com>
Co-authored-by: Alexander Borzunov <borzunov.alexander@gmail.com>
Co-authored-by: Aleksandr Borzunov <hxrussia@gmail.com>
2023-08-28 16:01:50 +03:00
Alexander Borzunov
90840dfea2
Fix requiring transformers>=4.32.0 (#480) 2023-08-26 05:32:16 +04:00
Alexander Borzunov
915b357740
Require transformers>=4.32.0 (#479)
It's necessary to load https://huggingface.co/petals-team/StableBeluga2 since it doesn't have deprecated `inv_freq` weights.
2023-08-25 01:37:30 +04:00
Alexander Borzunov
18e93afc73
Don't install cpufeature on non-x86_64 machines (#478)
Necessary since cpufeature crashes when installing on ARM.
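
One common way to express such a platform-conditional dependency is a PEP 508 environment marker; the snippet below is a hedged sketch of the idea, not necessarily the exact change made in this PR.

```python
# setup.py excerpt: install cpufeature only on x86_64, where it actually builds and works.
from setuptools import setup

setup(
    name="example-package",  # placeholder, not the real package metadata
    install_requires=[
        'cpufeature; platform_machine == "x86_64"',  # PEP 508 environment marker
    ],
)
```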
2023-08-24 19:57:15 +04:00
Alexander Borzunov
6967904590
Bump version to 2.1.0 (#474)
* Bump version to 2.1.0
* Suggest using resharded repo
* LLaMA -> Llama in readme
2023-08-24 19:42:19 +04:00
Alexander Borzunov
df8ab09ca2
Hide excess key message (#476)
Before:

```
Aug 23 23:51:31.394 [INFO] Loaded Maykeye/TinyLLama-v0 block 0, _IncompatibleKeys(missing_keys=[], unexpected_keys=['self_attn.rotary_emb.inv_freq'])
```

After:

```
Aug 23 23:51:31.394 [INFO] Loaded Maykeye/TinyLLama-v0 block 0
```

Hiding this because the excess keys in Llama-based models are harmless as of the latest transformers release.
2023-08-24 00:41:40 +04:00
Artem Chumachenko
a14ae7334d
Update peft to 0.5.0 version (#475)
Update peft to 0.5.0
2023-08-23 20:21:28 +04:00
Alexander Borzunov
a9b0e9ff1a
Support loading weights from Safetensors on server (#473) 2023-08-23 01:43:29 +04:00
justheuristic
4f850996bb
Change transformers version assert (#472) 2023-08-22 21:53:14 +04:00
justheuristic
9250025140
Support transformers 4.32.x (#471) 2023-08-22 20:10:29 +03:00
justheuristic
adda5f8c20
Temporarily require peft<0.5.0, transformers<4.32.0 (#470)
Peft 0.5.0 was released recently and broke some compatibilities. This PR temporarily requires Petals to use the previous stable version of peft while we work on 0.5.0 support.
2023-08-22 19:45:37 +03:00
Alexander Borzunov
de2475f31c
Make client compatible with transformers' GenerationMixin (#464)
This PR drops the custom generation code and introduces compatibility with `transformers.GenerationMixin` instead. This includes support for more sampling options (`top_p`, `top_k`, `repetition_penalty` requested in #460) and beam search - all of this is now identical to running the model with transformers locally.

Most features (excluding beam search and other rarely used stuff) are also compatible with resuming existing sessions.

### Breaking changes

If `.generate()` or forward passes are being run inside an `.inference_session()` context, they now use the opened session by default. So, these snippets are now equivalent:

```python
# Using default session
with model.inference_session(max_length=100):
    output_ids = model.generate(input_ids, max_new_tokens=3)

# Explicitly specifying a session
with model.inference_session(max_length=100) as sess:
    output_ids = model.generate(input_ids, max_new_tokens=3, session=sess)
```

Earlier, the first snippet created a new session, which is not what most people expected (such code was likely to introduce a bug, which is now fixed).
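
Since generation now goes through `transformers.GenerationMixin`, the newly supported sampling options can be passed straight to `.generate()`. A brief hedged usage sketch (the distributed model and tokenizer are assumed to be already loaded):

```python
# `model` and `tokenizer` are assumed to be an already-loaded distributed model and its tokenizer.
inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]

output_ids = model.generate(
    inputs,
    max_new_tokens=16,
    do_sample=True,          # sampling instead of greedy decoding
    top_p=0.9,               # nucleus sampling, one of the options requested in #460
    top_k=50,
    repetition_penalty=1.2,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```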
2023-08-20 19:18:36 +04:00
Alexander Borzunov
063e94b4c8
Move SequenceManagerConfig -> ClientConfig, petals.dht_utils -> petals.utils.dht (#463) 2023-08-14 17:05:20 +04:00
Artem Chumachenko
568f21dc3b
Add customizable input tensors (#445) 2023-08-14 12:23:16 +04:00
Alexander Borzunov
329f7d31e8
Add blocked_servers argument (#462)
Should be used as:

```python
model = AutoDistributedModelForCausalLM(model_name, blocked_servers=[peer_id1, peer_id2])
```
2023-08-14 10:41:13 +04:00
Alexander Borzunov
722c4dc496
Bump version to 2.0.1.post2 (#459) 2023-08-11 09:34:05 +04:00
Alexander Borzunov
056f22515a
Prioritize short inference, unmerge pools for long inference (#458)
Right now, long inference requests may occupy the Runtime for a few seconds without yielding it to process short (and most latency-sensitive) requests. This PR fixes that by disallowing the merged pool for long requests and prioritizing the short ones.
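
A hedged, greatly simplified sketch of the prioritization idea (not the actual `PrioritizedTaskPool`): shorter requests get smaller priority values in a heap, so they are served first.

```python
import heapq
import itertools

_tie_breaker = itertools.count()  # keeps requests of equal length in FIFO order

def push_request(queue: list, num_tokens: int, request) -> None:
    # Smaller priority values are popped first, so short requests jump ahead of long ones.
    heapq.heappush(queue, (num_tokens, next(_tie_breaker), request))

def pop_next_request(queue: list):
    _, _, request = heapq.heappop(queue)
    return request
```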
2023-08-11 09:24:33 +04:00
justheuristic
55eb36ef48
Fix missing torch.cuda.synchronize for computing throughput (#456) 2023-08-09 22:59:56 +04:00
Alexander Borzunov
0e7189b3ed
benchmarks: Aggregate speed among workers, set default dtype torch32 (#454) 2023-08-09 16:50:02 +04:00
Alexander Borzunov
8c546d988a
Test Llama, rebalancing, throughput eval, and all CLI scripts (#452)
This PR extends CI to:

1. Test Llama code using [TinyLlama-v0](https://huggingface.co/Maykeye/TinyLLama-v0).
2. Test rebalancing (sets up a situation where the 1st server needs to change its original position).
3. Check if benchmark scripts run (in case someone breaks their code). Note that the benchmark results are meaningless here (since they're measured on a tiny swarm of CPU servers, with low `--n_steps`).
4. Test `petals.cli.run_dht`.
5. Increase swap space and watch free RAM (a common issue is that actions are cancelled without explanation if there's not enough RAM - so it's a useful reminder + debug tool).
6. Fix flapping tests for bloom-560m by increasing tolerance.

Other minor changes: fix `--help` messages to show defaults, fix docs, tune rebalancing constants.
2023-08-08 19:10:27 +04:00
Alexander Borzunov
df6fdd2d0b
Force using --new_swarm instead of empty --initial_peers (#451)
This prohibits passing `--initial_peers` without arguments, since it's likely to be a side-effect from `--initial_peers $INITIAL_PEERS` with the env var not set.

Users should use `--new_swarm` for that, as explained in the private swarm tutorial.
2023-08-08 04:59:55 +04:00
Alexander Borzunov
2a150770a4
Prefer longer servers for fine-tuning, exclude unreachable (#448)
We choose longer servers to minimize the number of hops but leave some randomization to distribute the load. We also exclude servers known to be unreachable.
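
For illustration, a hedged sketch of length-weighted random choice with unreachable servers excluded; the function and its arguments are hypothetical, and the real routing is more elaborate.

```python
import random
from typing import Dict, Set, Tuple

def choose_server(spans: Dict[str, Tuple[int, int]], unreachable: Set[str]) -> str:
    """Pick a server, preferring longer block spans but keeping some randomness."""
    candidates = {peer: span for peer, span in spans.items() if peer not in unreachable}
    peers = list(candidates)
    # Weight by span length: longer servers mean fewer hops; randomness spreads the load.
    weights = [end - start for start, end in candidates.values()]
    return random.choices(peers, weights=weights, k=1)[0]
```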
2023-08-07 21:43:21 +04:00
Alexander Borzunov
00d48dcbe1
Override float32 in config to bfloat16 (#431) 2023-08-07 19:47:22 +04:00