Commit Graph

509 Commits (cc4fe17a9906034e9c59ba6a33cdde3af226b6b5)
 

Author SHA1 Message Date
Your Name cc4fe17a99 minimize diff 9 months ago
Your Name 6c7f762379 rollback: only generic kwarg 9 months ago
Your Name 6256995bb1 Merge remote-tracking branch 'origin/main' into forward_kwargs 9 months ago
Max Ryabinin 1ebd88ae7b
Optimize the Falcon block for inference (#500)
This PR attempts to optimize the inference of Falcon models in the single-token setup by removing most of the Python overhead and making several assumptions about the setup. Specifically:

* Layer normalization, QKV projection (with splitting), and rotary embeddings are executed through CUDA graphs, which removes most of the overhead related to small kernel launches
* If no sin/cos tensors are cached by the rotary embedding layer, we cache them for 8192 tokens (INFERENCE_MAX_LENGTH) during the first forward pass. In general, it should be beneficial to always run a max-length sequence before starting a block, but this is a question for another PR

The PR also adds a small test to ensure that the results of the block (without quantization) before and after the optimization indeed match.

Lastly, the pull request makes the backward pass work (as discussed in https://github.com/bigscience-workshop/petals/pull/499) by making cached sin/cos for RotaryEmbedding into buffers and disabling the inference mode during their creation.
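
The CUDA-graph part of this optimization can be illustrated with a minimal, self-contained sketch (illustrative module and sizes, not the actual Petals Falcon block): the small layers are captured into a graph once and then replayed for every decoded token, avoiding per-call Python and kernel-launch overhead.

```python
import torch

# Hypothetical Falcon-like dimensions; this is a sketch, not the code from this PR.
hidden_size, batch_size = 4544, 1

if torch.cuda.is_available():
    block_prefix = torch.nn.Sequential(
        torch.nn.LayerNorm(hidden_size),
        torch.nn.Linear(hidden_size, 3 * hidden_size, bias=False),  # stand-in for the QKV projection
    ).cuda().eval()

    static_input = torch.zeros(batch_size, 1, hidden_size, device="cuda")

    # Warm up on a side stream before capture, as recommended for CUDA graphs
    side_stream = torch.cuda.Stream()
    side_stream.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(side_stream), torch.no_grad():
        for _ in range(3):
            block_prefix(static_input)
    torch.cuda.current_stream().wait_stream(side_stream)

    graph = torch.cuda.CUDAGraph()
    with torch.cuda.graph(graph), torch.no_grad():
        static_output = block_prefix(static_input)

    # Per-token inference: copy new activations into the static buffer and replay the graph
    static_input.copy_(torch.randn(batch_size, 1, hidden_size, device="cuda"))
    graph.replay()  # static_output now holds the new result with minimal launch overhead
```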
9 months ago
Alexander Borzunov d40eb6c701
Fix prompt tuning after #464 (#501)
Unfortunately, running inference in models with `"ptune" in config.tuning_mode` was broken after #464.
9 months ago
Alexander Borzunov dd4a3230bc
Add Falcon support (#499)
This PR adds:

- Support for models based on `transformers.FalconModel` (the in-library format for Falcon). Tested on Falcon-40B.
- CI tests for Falcon-RW-1B.
- `--throughput dry_run` option to evaluate throughput and exit right away (implemented by @mryab).

Limitations:

- Backward pass support is broken for now; it will be fixed in #500.

Co-authored-by: Max Ryabinin <mryabinin0@gmail.com>
9 months ago
Alexander Borzunov b4d822afb2
Force use_cache=True in config only (#497)
This reverts a part of #496 and instead overrides `use_cache` in `LlamaConfig`s only (so the correct value is visible to HF `.generate()` as well).
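
As a minimal illustration of the approach (not the exact code from this PR), the flag can be set on the config object itself so that both the model and HF `.generate()`, which reads its defaults from the config, see the same value:

```python
from transformers import LlamaConfig

# Example checkpoint name, chosen for illustration only
config = LlamaConfig.from_pretrained("huggyllama/llama-7b")
config.use_cache = True  # forced in the config rather than patched elsewhere
```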
9 months ago
Alexander Borzunov abd547735f
Force use_cache=True (#496) 9 months ago
justheuristic ce89b649b5
Merge branch 'main' into forward_kwargs 9 months ago
Alexander Borzunov 6ef6bf5fa2
Create model index in DHT (#491)
This PR creates an index of models hosted in the swarm. It is useful for discovering which custom models users run, so they can be displayed at https://health.petals.dev as "not officially supported" models.
9 months ago
Alexander Borzunov 6bb3f54e39
Replace dots in repo names when building DHT prefixes (#489) 9 months ago
Alexander Borzunov 02fc71eb25
Fix race condition in MemoryCache (#487) 9 months ago
Alexander Borzunov dc0072fde1
Wait for DHT storing state OFFLINE on shutdown (#486) 9 months ago
Alexander Borzunov a26559ff65
Fix `.generate(input_ids=...)` (#485) 9 months ago
Alexander Borzunov 459933f846
Remove no-op process in PrioritizedTaskPool (#484)
Please revert this if you ever need to make `PrioritizedTaskPool` a process again.
9 months ago
Alexander Borzunov 26ebbfe8f0
Support macOS (#477)
This PR makes both clients and servers work on macOS. Specifically, it:

- Follows https://github.com/learning-at-home/hivemind/pull/586 to run a macOS-compatible `p2pd` binary (both x86-64 and ARM64 are supported)
- Fixes forking issues and tests on macOS, Python 3.10+
- Introduces basic support for serving model blocks on Apple M1/M2 GPUs (torch.mps)
- Increases the maximum number of open files by default (the default is insufficient on Linux and very small on macOS)
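
A minimal sketch of two of these changes (illustrative only, not the actual Petals code): selecting the Apple-silicon backend when available and raising the soft limit on open file descriptors.

```python
import resource

import torch

# Prefer the Apple M1/M2 GPU backend when available; otherwise fall back to CPU
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# Raise the soft limit on open files up to the hard limit; 4096 is an arbitrary example value
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (min(max(soft, 4096), hard), hard))
```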
9 months ago
Alexander Borzunov 75e516a8c1
Refactor readme (#482) 9 months ago
justheuristic c08d09c4d3
Rewrite MemoryCache alloc_timeout logic (#434)
- rpc_inference: the server now accepts an allocation timeout from the user (defaults to no timeout)
- bugfix: the inference timeout is now measured from the moment the request is received
    - previously, you had to wait for your own timeout plus the time it took to get through the queue (other users' timeouts)
    - now, you get AllocationFailed if you had to wait for more than `timeout` seconds, regardless of other users
- a request for inference with no timeout now fails instantly if there is not enough memory available
- the number of bytes per dtype is now correctly determined for int, bool & other types
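
The timeout semantics above can be sketched roughly as follows (the names `allocate_with_timeout` and `AllocationFailed`'s signature are illustrative, not the real MemoryCache API): the deadline is anchored at the moment the request was received, so time spent waiting behind other users counts against the caller's own timeout.

```python
import asyncio
import time
from typing import Optional


class AllocationFailed(Exception):
    """Raised when cache memory cannot be allocated within the caller's timeout."""


async def allocate_with_timeout(
    cache_semaphore: asyncio.Semaphore, timeout: Optional[float], received_at: float
) -> None:
    if timeout is None:
        # No timeout: fail instantly instead of queuing behind other users
        if cache_semaphore.locked():
            raise AllocationFailed("Not enough memory and no allocation timeout was requested")
        await cache_semaphore.acquire()
        return

    # The deadline starts when the request was received, so queueing time is charged
    # to this caller's own timeout rather than added on top of it
    remaining = timeout - (time.monotonic() - received_at)
    try:
        await asyncio.wait_for(cache_semaphore.acquire(), timeout=max(remaining, 0.0))
    except asyncio.TimeoutError:
        raise AllocationFailed(f"Could not allocate memory within {timeout} seconds") from None
```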


---------

Co-authored-by: Your Name <you@example.com>
Co-authored-by: Alexander Borzunov <borzunov.alexander@gmail.com>
Co-authored-by: Aleksandr Borzunov <hxrussia@gmail.com>
9 months ago
Your Name 49ff759d30 undo 9 months ago
Your Name 84ebd57105 WIP, switching to another PR 9 months ago
Alexander Borzunov 90840dfea2
Fix requiring transformers>=4.32.0 (#480) 9 months ago
Your Name 09e9da6eb1 serialize outputs structure 9 months ago
Alexander Borzunov 915b357740
Require transformers>=4.32.0 (#479)
This is necessary to load https://huggingface.co/petals-team/StableBeluga2, since it doesn't contain the deprecated `inv_freq` weights.
9 months ago
Alexander Borzunov 18e93afc73
Don't install cpufeature on non-x86_64 machines (#478)
This is necessary because cpufeature crashes when being installed on ARM.
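
One common way to express this (a hypothetical packaging fragment, not necessarily how Petals does it) is a PEP 508 environment marker, so pip skips the dependency on non-x86_64 machines:

```python
from setuptools import setup

setup(
    name="example-package",  # placeholder project name
    install_requires=[
        # cpufeature is only installed on x86_64; the marker makes pip skip it on ARM
        'cpufeature; platform_machine == "x86_64"',
    ],
)
```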
9 months ago
Alexander Borzunov 6967904590
Bump version to 2.1.0 (#474)
* Bump version to 2.1.0
* Suggest using resharded repo
* LLaMA -> Llama in readme
9 months ago
justheuristic 22bcbb34fd
Merge branch 'main' into forward_kwargs 9 months ago
Alexander Borzunov df8ab09ca2
Hide excess key message (#476)
Before:

```
Aug 23 23:51:31.394 [INFO] Loaded Maykeye/TinyLLama-v0 block 0, _IncompatibleKeys(missing_keys=[], unexpected_keys=['self_attn.rotary_emb.inv_freq'])
```

After:

```
Aug 23 23:51:31.394 [INFO] Loaded Maykeye/TinyLLama-v0 block 0
```

We hide this message because, as of the latest transformers release, the excess keys in Llama-based models are harmless.
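
A minimal sketch of the idea (an illustrative helper, not the actual Petals loading code): load with `strict=False`, warn only about genuinely missing keys, and keep the "Loaded ... block" message free of the harmless unexpected keys.

```python
import logging

import torch

logger = logging.getLogger(__name__)


def load_block_weights(block: torch.nn.Module, state_dict: dict, block_index: int) -> None:
    # strict=False tolerates extra keys such as 'self_attn.rotary_emb.inv_freq'
    report = block.load_state_dict(state_dict, strict=False)
    if report.missing_keys:
        logger.warning(f"Missing keys for block {block_index}: {report.missing_keys}")
    logger.info(f"Loaded block {block_index}")  # unexpected keys are no longer printed
```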
9 months ago
Artem Chumachenko a14ae7334d
Update peft to 0.5.0 version (#475)
Update peft to 0.5.0
9 months ago
Alexander Borzunov a9b0e9ff1a
Support loading weights from Safetensors on server (#473) 9 months ago
justheuristic 4f850996bb
Change transformers version assert (#472) 9 months ago
justheuristic 9250025140
Support transformers 4.32.x (#471) 9 months ago
justheuristic adda5f8c20
Temporarily require peft<0.5.0, transformers<4.32.0 (#470)
Peft 0.5 was released recently and broke some compatibilities. This PR temporarily requires Petals to use the previous stable version of peft while we work on 0.5.0 support.
9 months ago
justheuristic 1e5df2916e
Merge branch 'main' into forward_kwargs 9 months ago
Your Name d51c08ef20 undo debug change 9 months ago
Your Name 4529471f3f black, isort 9 months ago
Your Name 13c13d347a wip (again) 9 months ago
Your Name 65e87395bc wip (again) 9 months ago
Alexander Borzunov de2475f31c
Make client compatible with transformers' GenerationMixin (#464)
This PR drops the custom generation code and introduces compatibility with `transformers.GenerationMixin` instead. This includes support for more sampling options (`top_p`, `top_k`, `repetition_penalty` requested in #460) and beam search - all of that is now identical to running the model locally with transformers.

Most features (excluding beam search and other rarely used stuff) are also compatible with resuming existing sessions.

### Breaking changes

If `.generate()` or forward passes are being run inside an `.inference_session()` context, they now use the opened session by default. So, these snippets are now equivalent:

```python
# Using default session
with model.inference_session(max_length=100):
    output_ids = model.generate(input_ids, max_new_tokens=3)

# Explicitly specifying a session
with model.inference_session(max_length=100) as sess:
    output_ids = model.generate(input_ids, max_new_tokens=3, session=sess)
```

Earlier, the first snippet created a new session, which is not what most people expected (such code was likely to introduce a bug; this is now fixed).
10 months ago
Your Name 084d565845 priority pool 10 months ago
Your Name 355c1509e1 black-isort 10 months ago
Your Name fb9b21132c black-isort 10 months ago
Your Name ed8d7f41b8 mwp 10 months ago
Your Name f313730767 WIP 10 months ago
Your Name 1879788705 typos 10 months ago
Alexander Borzunov 063e94b4c8
Move SequenceManagerConfig -> ClientConfig, petals.dht_utils -> petals.utils.dht (#463) 10 months ago
Artem Chumachenko 568f21dc3b
Add customizable input tensors (#445) 10 months ago
Alexander Borzunov 329f7d31e8
Add `blocked_servers` argument (#462)
Should be used as:

```python
model = AutoDistributedModelForCausalLM(model_name, blocked_servers=[peer_id1, peer_id2])
```
10 months ago
Alexander Borzunov 722c4dc496
Bump version to 2.0.1.post2 (#459) 10 months ago
Alexander Borzunov 056f22515a
Prioritize short inference, unmerge pools for long inference (#458)
Right now, long inference requests may occupy the Runtime for a few seconds without yielding it to process short (the most latency-sensitive) requests. This PR fixes this by disallowing the merged pool for long requests and prioritizing the short ones.
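
A rough sketch of the scheduling idea (illustrative classes, not the real PrioritizedTaskPool): short requests carry a lower priority value, so they are dequeued ahead of long inference requests that would otherwise hold the Runtime.

```python
import heapq
import itertools
from typing import Any, Callable


class PriorityTaskQueue:
    """Toy dispatcher: short (latency-sensitive) tasks are always served first."""

    def __init__(self) -> None:
        self._heap: list = []
        self._counter = itertools.count()  # tie-breaker keeps FIFO order within a priority level

    def submit(self, task: Callable[[], Any], is_short: bool) -> None:
        priority = 0 if is_short else 1  # lower number = served earlier
        heapq.heappush(self._heap, (priority, next(self._counter), task))

    def run_next(self) -> Any:
        _, _, task = heapq.heappop(self._heap)
        return task()
```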
10 months ago
justheuristic 55eb36ef48
Fix missing torch.cuda.synchronize for computing throughput (#456) 10 months ago