Commit Graph

16 Commits

Alexander Borzunov
fef7257fe0
Try to fix protobuf versions once again (#95)
The goals of these changes are:

- Make Petals work in Colab right after just doing `pip install -r requirements.txt`
- Make tests work independently of the protobuf package version chosen while installing dependencies
2022-11-28 12:18:03 +04:00
Aleksandr Borzunov
1b51703444
Revert protobuf version change
2022-11-28 07:19:54 +00:00
Alexander Borzunov
b26b0b7121
Require hivemind with fixed compression and protobuf working on Colab (#94)
2022-11-28 10:51:37 +04:00
Alexander Borzunov
11d6ba683c
Make inference, forward, and backward fully fault-tolerant (#91)
2022-11-27 04:11:54 +04:00
Alexander Borzunov
57e8d2e721
Implement exponential backoff for forward & backward (#85)
2022-11-02 01:21:15 +04:00
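For readers unfamiliar with the pattern named in this commit, a minimal sketch of exponential backoff with jitter follows. This is a generic illustration, not Petals' actual implementation; the helper name and parameters are hypothetical.

```python
import random
import time


def with_exponential_backoff(fn, max_retries=5, base_delay=1.0, max_delay=60.0):
    """Call fn(), retrying on failure with exponentially growing delays.

    Hypothetical helper for illustration only; Petals' real retry logic
    lives in its client code and differs in detail.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries, propagate the last error
            # Delay doubles each attempt, capped at max_delay,
            # with a small random jitter to avoid thundering herds.
            delay = min(base_delay * 2 ** attempt, max_delay)
            time.sleep(delay + random.uniform(0, delay / 10))
```

The jitter term is a common refinement: without it, many clients that failed together would also retry in lockstep.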
Alexander Borzunov
f64eb3a665
Update hivemind to 1.1.2, mark model argument as required (#81)
2022-10-26 03:23:18 +04:00
justheuristic
fef48d7d99
Use bitsandbytes==0.34.0, update readme (#76)
* unlock bnb backward
* Fix bnb version in README
* Update requirements.txt
2022-09-20 13:07:34 +03:00
justheuristic
8caf1145a8
Quality of life changes: update readme, simplify run_server interface (#75)
- run_server now accepts model name as both positional and keyword argument
- changed names in README to account for interface updates
- moved model conversion from README to a separate wiki page
- updated requirements.txt
2022-09-20 03:51:57 +03:00
justheuristic
3fdcc55a56
fix protobuf version (#74)
* fix protobuf version
2022-09-18 04:54:08 +03:00
justheuristic
e92487e5d2
Update dependency versions (#71)
* update dependency versions
* install bitsandbytes cpuonly from pip
* remove deprecated API from task pool
* clearer startup logs

Co-authored-by: Tim Dettmers <dettmers@cs.washington.edu>
2022-09-13 03:51:15 +03:00
justheuristic
d271b75dd4
Let users specify sequence length instead of assuming 2048 (#52)
- Maximum length is now provided in `.inference_session(max_length=100)`
   - previously, we would always assume max length = 2048
- added a generic way to forward **kwargs to inference session
  - for compatibility with #47 
  - Note to @borzunov : it does *not* pass them arbitrarily, but instead checks for kwarg names at the bottom level
- run_server can be started with a custom max_length for inference
- renamed --cache_size_bytes to --attention_cache_bytes (to avoid collision with --cache_dir)
- --attn_cache_bytes now supports human-readable file sizes (e.g. 300MB instead of 314572800)
- made some server-side errors more human-readable to the user (e.g. when max length is exceeded)

Co-authored-by: Aleksandr Borzunov <borzunov.alexander@gmail.com>
Co-authored-by: Alexander Borzunov <hxrussia@gmail.com>
2022-08-29 21:04:37 +03:00
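The human-readable size support mentioned in the commit above (300MB in place of 314572800) can be sketched with a small parser. This is an illustrative sketch only; the function name and accepted formats are assumptions, and Petals' actual argument parsing may differ.

```python
import re

# Binary multipliers: 300MB -> 300 * 1024**2 = 314572800 bytes
_UNITS = {"": 1, "KB": 1024, "MB": 1024**2, "GB": 1024**3, "TB": 1024**4}


def parse_size(text):
    """Parse a human-readable size like '300MB' or '1.5 GB' into bytes.

    Hypothetical helper for illustration; not the project's real parser.
    """
    match = re.fullmatch(r"(\d+(?:\.\d+)?)\s*([KMGT]?B)?", text.strip(), re.IGNORECASE)
    if not match:
        raise ValueError(f"Unrecognized size: {text!r}")
    number, unit = match.groups()
    return int(float(number) * _UNITS[(unit or "").upper()])
```

A bare number is treated as a byte count, so old-style values like 314572800 keep working unchanged.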
Dmitry Baranchuk
11a424837f
integrate mixed-8bit model (#39)
* integrate mixed-8bit model
* Fix bug with model duplication in RAM
* set throughput=1.0 to fix zero throughput problem
* add revision support
* update hivemind and bitsandbytes
* update deploy scripts
* update installation instructions
2022-08-04 09:57:37 +03:00
Dmitry Baranchuk
04a2b6f5e3
Support various backend dtypes & async serialization (#38)
2022-07-28 18:33:58 +03:00
justheuristic
e2711a033b
Add automated tests (#23)
This PR will run basic tests automatically on each subsequent PR

- convert a small model on every PR
- run existing tests on every PR
- enforce black / isort
- require checks on merge
- make sure tests are not flaky

Co-authored-by: Alexander Borzunov <hxrussia@gmail.com>
Co-authored-by: Dmitry Baranchuk <dmitrybaranchuk@gmail.com>
2022-07-16 01:59:23 +03:00
justheuristic
99059ae667
install script
2022-06-12 04:23:38 +03:00
justheuristic
b370b43110
freeze hivemind and transformers versions
2022-06-12 03:18:53 +03:00