Commit Graph

223 Commits (fix-protobuf)
 

Author SHA1 Message Date
Aleksandr Borzunov b1183000d6 Update notebooks 2 years ago
Aleksandr Borzunov f7eb94afe0 Merge branch 'main' into fix-protobuf 2 years ago
Aleksandr Borzunov 9834a45402 Try one more protobuf condition 2 years ago
Aleksandr Borzunov 8840230fb6 Try protobuf==3.19.0 2 years ago
Aleksandr Borzunov 1b51703444 Revert protobuf version change 2 years ago
Aleksandr Borzunov 208ecac300 Try to fix protobufs on Colab once again 2 years ago
Alexander Borzunov b26b0b7121
Require hivemind with fixed compression and protobuf working on Colab (#94) 2 years ago
Alexander Borzunov 8a73b41a42
Make ServerState announcements work better (#93)
- Before this PR, `ServerState.JOINING` was announced only once. This announcement quickly expires in the case of the full-size BLOOM, since loading blocks takes several minutes. This PR fixes it, so `ServerState.JOINING` is announced periodically in a thread until blocks are loaded.

- This PR also makes the `Server` class a non-thread, so it runs in the main thread and can catch `KeyboardInterrupt`. This is important, since if we are downloading blocks right now, we need to stop it and send the `ServerState.OFFLINE` message. Note that `ModuleContainer` is still a thread.

- (minor) For the sake of readability, I moved the `ModuleContainer.create()` definition, so it is now defined before `Server.__init__()` (this is because `.create()` is invoked first).
2 years ago
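A minimal sketch of the periodic-announcement pattern described in #93, using hypothetical `announce()` and `ServerState` stand-ins rather than the repo's actual classes:

```python
import threading
import time
from enum import Enum


class ServerState(Enum):
    OFFLINE = 0
    JOINING = 1
    ONLINE = 2


def announce(state: ServerState) -> None:
    # Placeholder for the real DHT announcement; just prints here.
    print(f"announcing {state}")


def keep_announcing(state: ServerState, stop: threading.Event, period: float = 30.0) -> None:
    # Re-announce periodically so the record does not expire while blocks are still loading.
    while not stop.is_set():
        announce(state)
        stop.wait(period)


stop_joining = threading.Event()
announcer = threading.Thread(
    target=keep_announcing, args=(ServerState.JOINING, stop_joining), daemon=True
)
announcer.start()

try:
    time.sleep(1.0)  # stand-in for the slow block-loading step
    stop_joining.set()
    announce(ServerState.ONLINE)
except KeyboardInterrupt:
    # Running in the main thread lets us catch Ctrl+C and report that we are leaving.
    stop_joining.set()
    announce(ServerState.OFFLINE)
```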
Alexander Borzunov dc71574a63
Use public swarm by default (#92)
This PR makes servers and clients use the public swarm's bootstrap peers if no other initial peers are specified.

If you'd like a server to start a new swarm, provide the `--new_swarm` CLI argument.
2 years ago
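A rough sketch of the fallback logic this PR describes; `--new_swarm` comes from the PR text, while the `--initial_peers` flag name and the `PUBLIC_INITIAL_PEERS` constant are illustrative assumptions:

```python
import argparse

# Illustrative placeholder; the real bootstrap multiaddrs live in the codebase.
PUBLIC_INITIAL_PEERS = ["/dns/bootstrap.example.org/tcp/31337/p2p/QmExamplePeerId"]

parser = argparse.ArgumentParser()
parser.add_argument("--initial_peers", nargs="*", default=None)
parser.add_argument("--new_swarm", action="store_true",
                    help="Start a brand-new swarm instead of joining the public one")
args = parser.parse_args([])  # parse an empty argv just to demonstrate the defaults

if args.new_swarm:
    initial_peers = []                    # no bootstrap peers: this node starts its own swarm
elif args.initial_peers:
    initial_peers = args.initial_peers    # explicit peers win over the default
else:
    initial_peers = PUBLIC_INITIAL_PEERS  # default: join the public swarm
print(initial_peers)
```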
Alexander Borzunov 11d6ba683c
Make inference, forward, and backward fully fault-tolerant (#91) 2 years ago
Artem Chumachenko 695df826c2
Force reinstall for hivemind in example notebooks (#88) 2 years ago
Alexander Borzunov dc6ecccac5
Implement timeouts in forward/backward (#90) 2 years ago
Aleksandr Borzunov 4518d65fdd Add MIT license 2 years ago
Alexander Borzunov 898f614515
Fix floating point issues in block_selection.py (#89) 2 years ago
Alexander Borzunov c07a7e0812
Add "Terms of Use" 2 years ago
Artem Chumachenko 0d9c7de0bd
Add sst-2 ipynb example (#86)
- Add an SST-2 example of prompt-based training
- Make some enhancements to the persona-chat example
2 years ago
Alexander Borzunov 57e8d2e721
Implement exponential backoff for forward & backward (#85) 2 years ago
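A generic sketch of exponential backoff with jitter, as an illustration of the retry strategy named in #85 (not the repo's actual client code):

```python
import random
import time


def with_backoff(fn, max_retries=8, base_delay=1.0, max_delay=60.0):
    """Retry fn() with exponentially growing, jittered delays between attempts."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: propagate the last error
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(delay * random.uniform(0.5, 1.5))
```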
Alexander Borzunov ee4e69c254
Enable rebalancing by default (#84) 2 years ago
Artem Chumachenko 2cb82dd648
Add colab-related changes (#80)
Add some changes to make working on Colab more comfortable.

Co-authored-by: Alexander Borzunov <hxrussia@gmail.com>
2 years ago
Alexander Borzunov 87fd6a4f08
Fix "Too many open files" during rebalancing (#83)
Now, the number of open files stays the same after every rebalancing.
2 years ago
Alexander Borzunov f64eb3a665
Update hivemind to 1.1.2, mark `model` argument as required (#81) 2 years ago
Alexander Borzunov 149f433763
Rebalance swarm when necessary (#34) 2 years ago
Alexander Borzunov 640bbc38a9
Make even smaller readability changes 2 years ago
Alexander Borzunov d1b012b479
Make small readability & style changes to the instructions (#77) 2 years ago
justheuristic fef48d7d99
Use bitsandbytes==0.34.0, update readme (#76)
* unlock bnb backward
* Fix bnb version in README
* Update requirements.txt
2 years ago
justheuristic 8caf1145a8
Quality of life changes: update readme, simplify run_server interface (#75)
- run_server now accepts the model name as either a positional or a keyword argument
- changed names in README to account for interface updates
- moved model conversion from README to a separate wiki page
- updated requirements.txt
2 years ago
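A hedged sketch of how a CLI can accept the model name both positionally and as a keyword argument; the `--model` flag name here is hypothetical and may differ from the real run_server interface:

```python
import argparse

parser = argparse.ArgumentParser()
# `nargs="?"` allows the model to be passed positionally; `--model` is a hypothetical
# keyword form (the actual flag name in run_server may differ).
parser.add_argument("model_positional", metavar="model", nargs="?", default=None)
parser.add_argument("--model", dest="model_keyword", default=None)

args = parser.parse_args(["my-org/my-model"])
model_name = args.model_keyword or args.model_positional
if model_name is None:
    parser.error("the model name is required (positionally or via --model)")
print(model_name)
```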
Artem Chumachenko 1046911dea
Add prompt tuning example on Personachat dataset (#69) 2 years ago
justheuristic 3fdcc55a56
fix protobuf version (#74)
* fix protobuf version
2 years ago
justheuristic e92487e5d2
Update dependency versions (#71)
* update dependency versions
* install bitsandbytes cpuonly from pip
* remove deprecated API from task pool
* clearer startup logs

Co-authored-by: Tim Dettmers <dettmers@cs.washington.edu>
2 years ago
Pavel Samygin 50535a8435
Priority tasks (#47)
* priority in handlers and backend pools
* simple points system on server side
* prioritize tasks in the handler before submitting them
* fix tests
* s/expert/block/g

Co-authored-by: justheuristic <justheuristic@gmail.com>
2 years ago
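A toy illustration of a priority-ordered task pool in the spirit of #47, where tasks with fewer accumulated points are served first; the class and field names are assumptions, not the server's actual implementation:

```python
import heapq
import itertools
from dataclasses import dataclass, field
from typing import Any


@dataclass(order=True)
class PrioritizedTask:
    priority: float                    # fewer "points" -> served earlier
    seq: int                           # tie-breaker keeps FIFO order among equal priorities
    payload: Any = field(compare=False)


class PriorityTaskPool:
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()

    def submit(self, payload, priority: float) -> None:
        heapq.heappush(self._heap, PrioritizedTask(priority, next(self._counter), payload))

    def pop(self):
        return heapq.heappop(self._heap).payload


pool = PriorityTaskPool()
pool.submit("inference request", priority=0.0)
pool.submit("backward pass", priority=10.0)
print(pool.pop())  # -> "inference request"
```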
justheuristic 892d18fea7
Build cpuonly from bitsandbytes main (#70)
Build cpuonly from main
2 years ago
justheuristic f3984b192a
Make attention cache wait until memory is freed (#53)
Previously, attempting to allocate memory in a MemoryCache that did not have enough space would throw AllocationFailed.

This PR changes the behavior to the following:
- by default, wait until memory is freed by other tenants (FIFO)
- if the allocation cannot be served within the timeout, throw AllocationFailed
- if the requested size is too big to fit even in an empty cache, throw AllocationFailed

- [x] passes existing tests
- [x] passes manual load tests

P.S. If anyone wondered: using mp.Condition would not make the code simpler; its lock behavior is slightly different from what we need here.

Co-authored-by: Alexander Borzunov <hxrussia@gmail.com>
Co-authored-by: Aleksandr Borzunov <borzunov.alexander@gmail.com>
2 years ago
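A simplified sketch of the allocate-or-wait behavior described in #53, using a toy cache and polling; the real MemoryCache uses different synchronization and also guarantees FIFO fairness, which this sketch omits:

```python
import threading
import time


class AllocationFailed(Exception):
    pass


class ToyMemoryCache:
    def __init__(self, max_size_bytes: int):
        self.max_size_bytes = max_size_bytes
        self.used_bytes = 0
        self._lock = threading.Lock()

    def allocate(self, size: int, timeout: float = 10.0) -> None:
        if size > self.max_size_bytes:
            # Too big to fit even in an empty cache: fail immediately.
            raise AllocationFailed(f"{size} bytes cannot fit in a {self.max_size_bytes}-byte cache")
        deadline = time.monotonic() + timeout
        while True:
            with self._lock:
                if self.used_bytes + size <= self.max_size_bytes:
                    self.used_bytes += size
                    return
            if time.monotonic() >= deadline:
                # Could not allocate before the timeout while waiting for other tenants.
                raise AllocationFailed(f"timed out waiting for {size} bytes")
            time.sleep(0.05)  # poll politely; the real cache uses smarter signalling

    def free(self, size: int) -> None:
        with self._lock:
            self.used_bytes -= size


cache = ToyMemoryCache(max_size_bytes=1024)
cache.allocate(512)
cache.free(512)
```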
justheuristic 8a0c056929
Fix calling rpc_info multiple times (#60)
call info once
2 years ago
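One common way to "call info once" is to memoize the first RPC result so repeated calls reuse it; this is only an illustrative sketch with hypothetical names, not the actual fix in #60:

```python
import functools


class ToyRemoteBlock:
    @functools.cached_property
    def info(self):
        # The expensive RPC happens only on first access; later accesses reuse the result.
        return self._fetch_info_over_rpc()

    def _fetch_info_over_rpc(self):
        print("fetching info over RPC...")  # stand-in for the real network call
        return {"max_length": 2048}


block = ToyRemoteBlock()
block.info  # triggers the RPC
block.info  # served from the cache
```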
Artem Chumachenko ada98a1b37
Add deep prompt inference (#66)
Add deep prompts to inference_step. Small refactoring of the deep prompt code.
2 years ago
Alexander Borzunov 54ad745bed
Warn that current instructions involve 6B model but we will replace them soon (#63) 2 years ago
Alexander Borzunov 5f0c5329d4
Update readme with arxiv link and more discussions (#62)
Co-authored-by: justheuristic <justheuristic@gmail.com>
2 years ago
Alexander Borzunov 9bea7b9ea8
Update bullet points with feedback from Tim and other people (#61)
Co-authored-by: Tim Dettmers <tim.dettmers@gmail.com>
2 years ago
Alexander Borzunov 7653562aa1
Use latest version of Petals scheme, shrink Petals logo (#59) 2 years ago
Alexander Borzunov 2eb5843852
Update readme for the 1st public release (#57) 2 years ago
Pavel Samygin 0be21775af
remove transformer block, implement as sequential of size 1 (#54)
* remove the transformer block class, implement it as a sequential of size 1
* reimplement get_remote_module
* fix readme

Co-authored-by: Alexander Borzunov <hxrussia@gmail.com>
Co-authored-by: Aleksandr Borzunov <borzunov.alexander@gmail.com>
2 years ago
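An illustration of the design choice in #54 using plain PyTorch: a single block is just a sequential of length one, so it shares the interface of a longer chain (toy layers stand in for remote transformer blocks):

```python
import torch
from torch import nn


# Toy stand-ins: in the real system these would be remote transformer blocks.
def make_block(hidden: int) -> nn.Module:
    return nn.Linear(hidden, hidden)


blocks = nn.Sequential(*[make_block(16) for _ in range(4)])

# "A single block" is simply a sequence of size 1: same interface, no special class.
single_block = blocks[2:3]  # slicing nn.Sequential returns another nn.Sequential
x = torch.randn(1, 16)
assert torch.allclose(single_block(x), blocks[2](x))
```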
Artem Chumachenko 77220c718c
Add shallow prefix-tuned inference (#55)
* Add prefix-tuned inference

* Add prefix-tuned inference

* Add preseq_length in prefix size
2 years ago
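A minimal sketch of shallow prefix tuning as described in #55: learnable prompt embeddings are prepended to the hidden states once, before the first block (names and shapes are illustrative):

```python
import torch
from torch import nn

hidden_size, pre_seq_len = 16, 4

# Learnable prompts, prepended once before the first block ("shallow" prefix tuning).
prefix = nn.Parameter(torch.randn(pre_seq_len, hidden_size) * 0.02)


def prepend_prefix(hidden_states: torch.Tensor) -> torch.Tensor:
    batch_size = hidden_states.shape[0]
    expanded = prefix.unsqueeze(0).expand(batch_size, -1, -1)
    return torch.cat([expanded, hidden_states], dim=1)


x = torch.randn(2, 10, hidden_size)  # (batch, seq, hidden)
x_with_prefix = prepend_prefix(x)    # (batch, pre_seq_len + seq, hidden)
print(x_with_prefix.shape)
```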
justheuristic d271b75dd4
Let users specify sequence length instead of assuming 2048 (#52)
- Maximum length is now provided in `.inference_session(max_length=100)`
   - previously, we would always assume max length = 2048
- added a generic way to forward **kwargs to inference session
  - for compatibility with #47 
  - Note to @borzunov: it does *not* pass them arbitrarily, but instead checks for kwarg names at the bottom level
- run_server can be started with a custom max_length for inference
- renamed --cache_size_bytes to --attention_cache_bytes (to avoid collision with --cache_dir)
- --attn_cache_bytes can now support human-readable file sizes (e.g. 300MB instead of 314572800)
- made some server-side errors more human-readable to user (e.g. when max length is exceeded)

Co-authored-by: Aleksandr Borzunov <borzunov.alexander@gmail.com>
Co-authored-by: Alexander Borzunov <hxrussia@gmail.com>
2 years ago
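The PR text already shows the new client call, `.inference_session(max_length=100)`. Below is a hedged sketch of the kind of human-readable size parsing implied by the `--attn_cache_bytes` change (e.g. "300MB" -> 314572800); the repo's actual parser may differ:

```python
import re

_UNITS = {"": 1, "KB": 1 << 10, "MB": 1 << 20, "GB": 1 << 30, "TB": 1 << 40}


def parse_size(value: str) -> int:
    """Parse sizes like '300MB' or '314572800' into bytes (binary units assumed)."""
    match = re.fullmatch(r"\s*([\d.]+)\s*([KMGT]?B)?\s*", value, flags=re.IGNORECASE)
    if match is None:
        raise ValueError(f"cannot parse size: {value!r}")
    number, unit = match.groups()
    return int(float(number) * _UNITS[(unit or "").upper()])


assert parse_size("300MB") == 314572800
assert parse_size("314572800") == 314572800
```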
Dmitry Baranchuk 948877149c
Fix recovering for sequential_backward (#50) 2 years ago
Dmitry Baranchuk 24ba3433e4
[Fix] make distributed seq cls not create the full BLOOM model (#49) 2 years ago
justheuristic f12d0deee9
[quickfix 1/n] remove expensive assertions in inference code (#48)
remove expensive assertions in inference code
2 years ago
Dmitry Baranchuk 0fd2caa4be
Convert actual model weights (#46) 2 years ago
justheuristic a2634001e9
Reduce vocabulary size in test model, fix bug in routing when overlapped (#45)
This PR reduces the vocabulary size to save memory during conversion, keeping only the first 50k tokens.
As a result:

* tests that load client-side embeddings need significantly less RAM
* we can now run CI tests with 4 servers instead of 2 (needed to test routing; see the bugs uncovered)
* some of the servers now use load balancing
* CI convert_model now takes 4-5 minutes (was 6-7)
2 years ago
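A small sketch of the vocabulary-truncation idea from #45: keep only the first 50k rows of the word-embedding matrix when building the test model (sizes and module names are illustrative):

```python
import torch
from torch import nn

full_vocab, kept_vocab, hidden = 250_880, 50_000, 16  # BLOOM's full vocab is ~250k tokens

full_embeddings = nn.Embedding(full_vocab, hidden)

# Keep only the first 50k rows of the embedding matrix for the test model.
small_embeddings = nn.Embedding(kept_vocab, hidden)
with torch.no_grad():
    small_embeddings.weight.copy_(full_embeddings.weight[:kept_vocab])
```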
Dmitry Baranchuk 5745882c67
fix rpc_forward_stream 2 years ago
Dmitry Baranchuk 6095f58681
Deep distributed prompt tuning (#42)
* implemented an option to add learnable prompts to intermediate layers
* added support for prompts (as input) in rpc_forward and rpc_backward
* added a test to check that RemoteSequential works correctly with deep prompts

Co-authored-by: justheuristic <justheuristic@gmail.com>
2 years ago
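A toy sketch of deep prompt tuning as described in #42: a separate learnable prompt is injected before each intermediate layer, rather than a single shallow prefix (module and tensor names are assumptions):

```python
import torch
from torch import nn

num_layers, pre_seq_len, hidden = 3, 4, 16

blocks = nn.ModuleList([nn.Linear(hidden, hidden) for _ in range(num_layers)])
# One learnable prompt per layer ("deep" prompts), unlike a single shallow prefix.
deep_prompts = nn.Parameter(torch.randn(num_layers, pre_seq_len, hidden) * 0.02)


def forward_with_deep_prompts(hidden_states: torch.Tensor) -> torch.Tensor:
    batch = hidden_states.shape[0]
    for layer_idx, block in enumerate(blocks):
        prompt = deep_prompts[layer_idx].unsqueeze(0).expand(batch, -1, -1)
        # Replace the first pre_seq_len positions with this layer's learnable prompt.
        hidden_states = torch.cat([prompt, hidden_states[:, pre_seq_len:]], dim=1)
        hidden_states = block(hidden_states)
    return hidden_states


x = torch.randn(2, pre_seq_len + 10, hidden)  # input reserves pre_seq_len prompt positions
print(forward_with_deep_prompts(x).shape)     # -> (2, pre_seq_len + 10, hidden)
```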
justheuristic 9460220a10
make pytest outputs more verbose (#44)
This PR adds --verbose and --duration* to pytest.
2 years ago