Commit Graph

541 Commits (forward_kwargs)
 

Author SHA1 Message Date
Your Name 3195579620 Merge remote-tracking branch 'origin/main' into forward_kwargs
# Conflicts:
#	src/petals/__init__.py
#	src/petals/client/inference_session.py
6 months ago
justheuristic d59c15c578
Bump version for inference diagnostics (#543)
bump version for inference diagnostics
6 months ago
Max Ryabinin 03cbe90234
Optimize LLaMA for inference (#513)
* Optimize LLaMa for inference
* Fix model type detection in tests
6 months ago
justheuristic 25a0796b39
Hotfix: require peft version 0.5.0 (#539)
Peft: strict version check for now

Co-authored-by: horik <hr.mail.qaq@gmail.com>
6 months ago
justheuristic dcce43670f
Hotfix: set transformers version <=4.34 temporarily (#538)
* fix transformers version for now


Co-authored-by: horik <hr.mail.qaq@gmail.com>
6 months ago
Alexander Borzunov 82a97d6e9e
Fix beam search in GPU clients (#531)
Fixes #503.
7 months ago
Alexander Borzunov 47d50e1e29
Improve default arguments for clients and servers (#530)
This PR updates multiple default arguments in clients and servers:

1. **The client defaults to `torch_dtype=torch.float32` instead of `torch_dtype="auto"`.**

    The old default was to load weights in the dtype they are saved in (usually bfloat16/float16), which caused issues when the client was run on CPU (the default unless you call `.cuda()`). Specifically, bfloat16 is slow on most CPUs (unless a CPU supports AVX512) and float16 can't be run natively and leads to an exception. This default was a legacy of the earliest Petals versions designed to run BLOOM - its embeddings were so big that they didn't fit into RAM in float32 (e.g., in Colab). The newer models don't have this issue.

    In contrast, the new default leads to good speed on all CPUs and is consistent with PyTorch and HF Transformers. Also, the client now shows the "bfloat16 on non-AVX512 CPU" warning in all cases (previously, this warning was shown only if the machine had enough RAM to fit float32 weights, which could hide the crucial reason for inference being slow).

    **Note:** This change is backward-incompatible, so we have to increase at least the minor package version (2.2.0 -> 2.3.0.dev0).

2. **The server uses 2x smaller `--attn_cache_tokens`.**

    The old default led to loading 39 (out of 80) or 78 (out of 80) blocks for popular models on some GPU types, which visibly slowed down inference due to an extra network hop. It also reserved too much cache, so inference slowed down well before the cache was actually used.

    The new default leads to more efficient block layouts and makes the inference routing algorithm choose alternative paths through other servers when a particular server already has enough active inference sessions (= its cache is full).

3. **The client's max number of retries can be limited by the `PETALS_MAX_RETRIES` env var.**

    This is to limit `ClientConfig.max_retries` in tests, so we see tracebacks instead of retrying indefinitely in case of errors.
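
    For illustration, a minimal client-side sketch of these defaults in action (the model name below is just an example, and the exact point at which `PETALS_MAX_RETRIES` is read is an assumption):

    ```python
    import os

    # Cap the client's retries (useful in tests) so errors surface as tracebacks
    # instead of being retried indefinitely. Set before importing petals, to be safe.
    os.environ["PETALS_MAX_RETRIES"] = "3"

    import torch
    from petals import AutoDistributedModelForCausalLM

    model = AutoDistributedModelForCausalLM.from_pretrained(
        "petals-team/StableBeluga2",  # example model name, not tied to this PR
        torch_dtype=torch.float32,    # explicit here; this is also the new client default
    )
    ```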
7 months ago
Max Ryabinin ae19b65095
Add position_ids argument to DistributedFalconModel (#525) 7 months ago
Alexander Borzunov 1d9401ddce
Update README.md (#520) 8 months ago
FYY a2484b3053
Fix file locks in NFS-mounted directories (#517)
Fix #515.
8 months ago
Alexander Borzunov 5ce4f1a159
Store (start_block, end_block) in each DHT record for reliability (#510)
This PR fixes gaps in the DHT server info caused by unavailable DHT keys. Now, one DHT key is enough to get info about all blocks hosted by a server, so the server's info stays visible unless all of its keys become unavailable.

Also, this PR refactors `petals.client.routing` and `petals.server.block_selection` modules to use the common `compute_spans()` function (defined in `petals.utils.dht`) and `RemoteSpanInfo` class (defined in `petals.data_structures`).
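
As a rough illustration of the idea (the names below are placeholders, not the actual `petals.utils.dht` API), a single record per server is enough to reconstruct the whole span of blocks that server hosts:

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class SpanSketch:
    peer_id: str
    start: int  # first hosted block (inclusive)
    end: int    # last hosted block (exclusive)

def compute_spans_sketch(records: Dict[str, Tuple[int, int]]) -> Dict[str, SpanSketch]:
    """Turn {peer_id: (start_block, end_block)} records into span objects."""
    return {peer: SpanSketch(peer, start, end) for peer, (start, end) in records.items()}

# Even if only one DHT key for server_A was reachable, its attached record
# still tells us that server_A hosts blocks [0, 40).
spans = compute_spans_sketch({"server_A": (0, 40), "server_B": (30, 80)})
```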
8 months ago
Alexander Borzunov 158621677b
Bump version to 2.2.0 (#502) 8 months ago
Your Name c665c42cf2 reduce diff 8 months ago
Your Name 3f06b53b1d temporary rollback: allow kwargs only at first inference step 8 months ago
Your Name 3048c3b3ad rollback 8 months ago
Your Name 721f7d2db3 unbreak everything 8 months ago
Your Name 3bffcde0fe black+isort 8 months ago
Your Name 8eb1722f1e standardize: s/backend_kwargs/block_kwargs/g everywhere 8 months ago
Your Name 68b8cea246 note 8 months ago
Your Name a23bd73f3b probably break everyting 8 months ago
Your Name 056cd77f11 standardize checking block_kwargs 8 months ago
Your Name aacd8b2f9d pass args/kwargs via forward 8 months ago
Your Name 62e780c054 check num block kwargs 8 months ago
Your Name 17d278e88a black-isort-clarify 8 months ago
Your Name 9e29140bb0 mention reference issue 8 months ago
Your Name b7bd4770d7 black-isort 8 months ago
Your Name f2049658b6 make it work for fwd, bwd 8 months ago
Your Name 465fd93147 more WIP 8 months ago
Your Name 4393d99e78 1isort 8 months ago
Your Name 49474e5477 wip some more 8 months ago
Your Name e5c2d8eca4 WIP BEFORE MEETING NEED BACKWARD UPDATE 8 months ago
Your Name 2e760319ab add docstr 8 months ago
Your Name cc4fe17a99 minimize diff 8 months ago
Your Name 6c7f762379 rollback: only generic kwarg 8 months ago
Your Name 6256995bb1 Merge remote-tracking branch 'origin/main' into forward_kwargs 8 months ago
Max Ryabinin 1ebd88ae7b
Optimize the Falcon block for inference (#500)
This PR attempts to optimize the inference of Falcon models in the single-token setup by reducing the majority of Python overhead and making several assumptions about the setup. Specifically,

* Layer normalization, QKV projection (with splitting) and rotary embeddings are executed through CUDA graphs, which reduces most of the overhead related to small kernel launches
* If no sin/cos tensors are cached by the rotary embedding layer, we cache them for 8192 tokens (INFERENCE_MAX_LENGTH) during the first forward pass. In general, it should be beneficial to always run a max-length sequence before starting a block, but this is a question for another PR

The PR also adds a small test to ensure that the results of the block (without quantization) before and after the optimization indeed match.

Lastly, the pull request makes the backward pass work (as discussed in https://github.com/bigscience-workshop/petals/pull/499) by making cached sin/cos for RotaryEmbedding into buffers and disabling the inference mode during their creation.
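
A rough sketch of the sin/cos caching idea (illustrative only, not the actual Petals code; 8192 stands in for `INFERENCE_MAX_LENGTH`):

```python
import torch
from torch import nn

class CachedRotaryEmbedding(nn.Module):
    """Precompute rotary sin/cos once and store them as module buffers."""

    def __init__(self, head_dim: int, max_length: int = 8192):
        super().__init__()
        inv_freq = 1.0 / (10000.0 ** (torch.arange(0, head_dim, 2).float() / head_dim))
        t = torch.arange(max_length).float()
        freqs = torch.outer(t, inv_freq)
        emb = torch.cat([freqs, freqs], dim=-1)
        # Buffers (not Parameters) move with .to()/.cuda() and, because they are
        # created eagerly here rather than under torch.inference_mode(), they can
        # also be used during the backward pass.
        self.register_buffer("cos_cached", emb.cos(), persistent=False)
        self.register_buffer("sin_cached", emb.sin(), persistent=False)

    def forward(self, position_ids: torch.LongTensor):
        return self.cos_cached[position_ids], self.sin_cached[position_ids]
```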
8 months ago
Alexander Borzunov d40eb6c701
Fix prompt tuning after #464 (#501)
Unfortunately, running inference in models with `"ptune" in config.tuning_mode` was broken after #464.
8 months ago
Alexander Borzunov dd4a3230bc
Add Falcon support (#499)
This PR adds:

- Support for models based on `transformers.FalconModel` (the in-library format for Falcon). Tested on Falcon-40B.
- CI tests for Falcon-RW-1B.
- `--throughput dry_run` option to evaluate throughput and exit right away (implemented by @mryab).

Limitations:

- Backward pass support is broken for now, will be fixed in #500.

Co-authored-by: Max Ryabinin <mryabinin0@gmail.com>
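
Client-side usage presumably stays the same as for other architectures; a hedged example, assuming a swarm that actually hosts the Falcon-RW-1B checkpoint used in the CI tests:

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "tiiuae/falcon-rw-1b"  # checkpoint used in CI; any hosted Falcon model works the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Falcon over Petals:", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
```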
9 months ago
Alexander Borzunov b4d822afb2
Force use_cache=True in config only (#497)
This reverts a part of #496 and instead overrides `use_cache` in `LlamaConfig`s only (so the correct value is visible to HF `.generate()` as well).
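
A minimal sketch of the pattern (not the exact Petals code): set the flag on the config object itself, so any code that reads `config.use_cache`, including HF `.generate()`, sees the forced value.

```python
from transformers import LlamaConfig

config = LlamaConfig.from_pretrained("huggyllama/llama-7b")  # hypothetical example checkpoint
config.use_cache = True  # forced at config level rather than per forward call
```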
9 months ago
Alexander Borzunov abd547735f
Force use_cache=True (#496) 9 months ago
justheuristic ce89b649b5
Merge branch 'main' into forward_kwargs 9 months ago
Alexander Borzunov 6ef6bf5fa2
Create model index in DHT (#491)
This PR creates an index of models hosted in the swarm - it is useful for knowing which custom models users run and for displaying them at https://health.petals.dev as "not officially supported" models.
9 months ago
Alexander Borzunov 6bb3f54e39
Replace dots in repo names when building DHT prefixes (#489) 9 months ago
Alexander Borzunov 02fc71eb25
Fix race condition in MemoryCache (#487) 9 months ago
Alexander Borzunov dc0072fde1
Wait for DHT storing state OFFLINE on shutdown (#486) 9 months ago
Alexander Borzunov a26559ff65
Fix `.generate(input_ids=...)` (#485) 9 months ago
Alexander Borzunov 459933f846
Remove no-op process in PrioritizedTaskPool (#484)
Please revert this if you ever need to make `PrioritizedTaskPool` a process again.
9 months ago
Alexander Borzunov 26ebbfe8f0
Support macOS (#477)
This PR makes both clients and servers work on macOS. Specifically, it:

- Follows https://github.com/learning-at-home/hivemind/pull/586 to run a macOS-compatible `p2pd` binary (both x86-64 and ARM64 are supported)
- Fixes forking issues and tests on macOS, Python 3.10+
- Introduces basic support for serving model blocks on Apple M1/M2 GPUs (torch.mps)
- Increases the max number of open files by default (the default limit is not enough on Linux and is really small on macOS)
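
For the open-files part, a rough sketch of the idea (not the exact Petals code; the target of 4096 is an arbitrary example):

```python
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
target = 4096 if hard == resource.RLIM_INFINITY else min(4096, hard)
try:
    if soft < target:
        # Raise the soft limit; the hard limit itself is left unchanged.
        resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
except (ValueError, OSError):
    pass  # the OS refused the new limit; keep the old one
```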
9 months ago
Alexander Borzunov 75e516a8c1
Refactor readme (#482) 9 months ago
justheuristic c08d09c4d3
Rewrite MemoryCache alloc_timeout logic (#434)
- rpc_inference: server will now accept allocation timeout from user, defaults to no timeout
- bugfix: inference timeout is now measured from the moment the request is received
    - previously, you would have to wait for your timeout plus the time it takes to sort through the queue (other users' timeout)
    - now, you get AllocationFailed if you had to wait for over (timeout) seconds - regardless of other users
- a request for inference with no timeout will now fail instantly if there is not enough memory available
- dtype number of bytes is now correctly determined for int, bool & other types
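
A toy sketch of the timeout semantics above (names are illustrative, not the actual `MemoryCache` API): the timer starts when the request is received, so time spent waiting behind other users counts against the caller's own budget.

```python
import asyncio
from typing import Optional

class AllocationFailed(Exception):
    pass

async def allocate_cache(free_memory: asyncio.Semaphore, num_bytes: int, timeout: Optional[float]):
    if not timeout:
        # No timeout given: fail instantly instead of queueing when memory is unavailable.
        if free_memory.locked():
            raise AllocationFailed(f"Could not allocate {num_bytes} bytes immediately")
        await free_memory.acquire()
        return
    try:
        # Measured from the moment the request is received, regardless of other users.
        await asyncio.wait_for(free_memory.acquire(), timeout=timeout)
    except asyncio.TimeoutError:
        raise AllocationFailed(f"Had to wait over {timeout} s for {num_bytes} bytes of cache")
```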


---------

Co-authored-by: Your Name <you@example.com>
Co-authored-by: Alexander Borzunov <borzunov.alexander@gmail.com>
Co-authored-by: Aleksandr Borzunov <hxrussia@gmail.com>
9 months ago