Commit Graph

286 Commits (34644f13e12845bf21072c9d51350692ce4e9af5)

Author SHA1 Message Date
Max Ryabinin 34644f13e1
Downgrade CUDA in Docker image to 11.0.3 (#145)
* Downgrade CUDA in Docker image to 11.0.3

* Remove development deps from the image
1 year ago
Artem Chumachenko 7911c2641d
Update advanced notebooks (#148)
Update examples

Co-authored-by: Alexander Borzunov <borzunov.alexander@gmail.com>
1 year ago
Alexander Borzunov 668b736031
Fix logging: do not duplicate lines, enable colors in Colab (#156) 1 year ago
Alexander Borzunov 041ad20891
Check reachability automatically and give advice how to fix it (#155)
1. If we connect to the **public swarm**, the server now **automatically checks its DHT's reachability** from the outside world using the API at http://health.petals.ml. This is important to prevent unreachable servers from proceeding (they create issues for the clients, such as repetitive retries).

    If http://health.petals.ml is down, the server proceeds without the check (so we don't depend on it). However, if health.petals.ml is up and explicitly tells us that we are unreachable, the server shows the reason and explains how to fix it.

    The check may be disabled with the `--skip_reachability_check` option (though I can't imagine cases where someone needs to use it).

2. Added `--port` and `--public_ip` as **simplified convenience options** for users not familiar with `--host_maddrs` and `--announce_maddrs`.
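The decision logic above can be sketched in a few lines. This is a minimal, hypothetical illustration: the actual response schema of the health.petals.ml API is an assumption here, as is the function name.

```python
from typing import Optional, Tuple

def interpret_reachability(report: Optional[dict]) -> Tuple[bool, str]:
    """Decide whether the server may proceed, given a (hypothetical) health-API report.

    If the health service is down (report is None), we proceed anyway,
    so the server does not depend on it.
    """
    if report is None:
        return True, "health service unavailable, skipping the reachability check"
    if report.get("reachable", True):
        return True, "server is reachable from the outside world"
    reason = report.get("reason", "unknown")
    return False, (
        f"server is unreachable: {reason}. "
        "Consider opening the port in your firewall or passing --public_ip"
    )
```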
1 year ago
Alexander Borzunov 73df69a117
Reset MemoryCache during rebalancings (#154)
Before this PR, if there were open inference sessions at the moment rebalancing was triggered, their cache was never properly destroyed.
1 year ago
Max Ryabinin bd91be27ea
Add missing methods for SamplingAlgorithm, fix docstrings (#107)
* Add missing methods for SamplingAlgorithm, fix docstrings

* Add SamplingAlgorithm to _choose_sample_algorithm

* Add test_sampling

* Add a warning if sampling options were passed, but do_sample=False

* Skip the sampling test for now
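The warning mentioned above can be sketched as follows; the function name and the exact kwarg names (`temperature`, `top_k`, `top_p`) are assumptions here, not the actual implementation.

```python
import warnings

def warn_if_sampling_ignored(do_sample: bool = False, **sampling_kwargs) -> None:
    # Collect sampling options the caller actually passed (non-None values)
    passed = [name for name, value in sampling_kwargs.items() if value is not None]
    if passed and not do_sample:
        warnings.warn(
            f"Sampling options {passed} have no effect because do_sample=False "
            "(greedy decoding is used)"
        )

# Example: temperature is passed but do_sample is left False -> warning
warn_if_sampling_ignored(do_sample=False, temperature=0.9)
```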

Co-authored-by: Alexander Borzunov <borzunov.alexander@gmail.com>
1 year ago
Max Ryabinin a0e8bbd28d
Fix arguments in remove_old_models.py (#153)
* Fix arguments in remove_old_models.py

* Remove unnecessary args.author

* Fix the GitHub Action as well
1 year ago
Alexander Borzunov 701ec7e53e
Clean up disk space (#152) 1 year ago
justheuristic b04982c1a2
Bump transformers to 4.25.1 (#151)
- latest accelerate, transformers, huggingface_hub
- rearrange attention caches to support https://github.com/huggingface/transformers/pull/18344
- remove unused code
- fix edge case where session crashes when receiving seq length 0
- assert transformers version when importing WrappedBloomBlock

Co-authored-by: Alexander Borzunov <borzunov.alexander@gmail.com>
Co-authored-by: Max Ryabinin <mryabinin0@gmail.com>
1 year ago
Alexander Borzunov e4dc938dfe
Fix OOMs during server rebalancing (#150)
The cause of the OOMs was the cyclic reference `TransformerBackend <-> PrioritizedTaskPool`, which could not be garbage collected properly. Still, I've added explicit tensor removal just in case.
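A minimal sketch of why such a cycle leaks memory, with plain Python objects standing in for `TransformerBackend` and `PrioritizedTaskPool` (CPython reference-counting semantics assumed):

```python
import gc
import weakref

class Backend:
    def __init__(self):
        self.pool = Pool(self)              # Backend -> Pool
        self.big_buffer = bytearray(2**20)  # stands in for the GPU cache

class Pool:
    def __init__(self, backend):
        self.backend = backend              # Pool -> Backend: the cycle

backend = Backend()
probe = weakref.ref(backend)
del backend
# Reference counting alone cannot free the cycle, so the buffer lingers
# until the cyclic collector runs -- hence the explicit tensor removal,
# which releases the large allocation promptly even if the cycle survives.
assert probe() is not None
gc.collect()                                # only the cyclic GC reclaims it
assert probe() is None
```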
1 year ago
Alexander Borzunov 83d9493b6c
Improve block size calculations (#149) 1 year ago
Aleksandr Borzunov f42e559c77 Update README.md 1 year ago
Alexander Borzunov 6beb686909
Add link to privacy & security Wiki (#144) 1 year ago
Alexander Borzunov 84fec81543
Suppress asyncio error logs by default (#142) 1 year ago
Alexander Borzunov e99bf36647
Use common folder for all caches, make it a volume in Dockerfile (#141) 1 year ago
Alexander Borzunov 5f50ea9c79
Update Anaconda instructions (#140) 1 year ago
Alexander Borzunov e1d8793f00
Show route on client (#139) 1 year ago
Alexander Borzunov 4cb0ac4718
Update texts in "Terms of use" and "Privacy and security" sections (#138) 1 year ago
Alexander Borzunov a94c91d870
Add Docker commands, use permanent Discord links (#137) 1 year ago
Alexander Borzunov 77a00e17f0
Fix "could not unlink the shared memory file" during rebalancing (#135) 1 year ago
Alexander Borzunov 318d690a5c
Fix waiting until free memory is available (#136) 1 year ago
Alexander Borzunov e8fac92e59
Allow .generate() to reuse existing inference session (#132) 1 year ago
Alexander Borzunov 1fe3716589
Don't ban servers in case of client-caused handler errors (#134) 1 year ago
Alexander Borzunov 66f1799d32
Set default --step_timeout to 5 min (#133) 1 year ago
Alexander Borzunov b873d92ffa
Update README.md 1 year ago
Alexander Borzunov 5d5d2666b8
Mention parallel inference 1 year ago
Alexander Borzunov 955eae30b3
Mention 1 sec/token explicitly 1 year ago
Alexander Borzunov 33c210b973
Update Colab notebook 1 year ago
Alexander Borzunov f56edaa13f
Fix inference and rpc_info() fault tolerance (#131) 1 year ago
justheuristic 79a4308992
Clear trigger before engaging in update (#130)
Update sequence_manager.py
1 year ago
Alexander Borzunov b8e1c1b7f5
Revert to hivemind==1.1.3 for stability (#129) 1 year ago
justheuristic 68c85e7492
Avoid synchronous updates, ban peers based on request outcome (#127)
- sequence_manager now keeps itself up to date - no need to update it manually
- if a peer fails a request, the sequence manager bans this peer temporarily. Ban times increase with failure streaks
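The streak-based ban policy can be sketched as below. This is an illustration only: the class name and the exponential schedule (`base * multiplier ** (streak - 1)`) are assumptions, not the actual sequence_manager code.

```python
import time

class PeerBans:
    """Temporarily ban peers; ban times grow with consecutive failures."""

    def __init__(self, base_seconds: float = 10.0, multiplier: float = 2.0):
        self.base = base_seconds
        self.mult = multiplier
        self.streaks = {}       # peer_id -> consecutive failure count
        self.banned_until = {}  # peer_id -> timestamp when the ban expires

    def on_failure(self, peer_id, now=None):
        now = time.monotonic() if now is None else now
        streak = self.streaks.get(peer_id, 0) + 1
        self.streaks[peer_id] = streak
        # Each consecutive failure doubles the ban duration
        self.banned_until[peer_id] = now + self.base * self.mult ** (streak - 1)

    def on_success(self, peer_id):
        # A successful request resets the streak and lifts the ban
        self.streaks.pop(peer_id, None)
        self.banned_until.pop(peer_id, None)

    def is_banned(self, peer_id, now=None):
        now = time.monotonic() if now is None else now
        return self.banned_until.get(peer_id, 0) > now
```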



Co-authored-by: Alexander Borzunov <borzunov.alexander@gmail.com>
1 year ago
Alexander Borzunov 9dbf5e2e6f
Set dht.num_workers = n_layer, update_period = 150, expiration = 300 (#125) 1 year ago
Max Ryabinin 3ca8b4f082
Fix typos with codespell (#126) 1 year ago
justheuristic 8491ed2bd3
Add checks for forward() inputs on the client side (#123) 1 year ago
Max Ryabinin 055f85b83e
Call block.load_state_dict only once (#124) 1 year ago
Artem Chumachenko 0855aa7347
Update notebooks to use full BLOOM-176B (#104)
Co-authored-by: Alexander Borzunov <borzunov.alexander@gmail.com>
1 year ago
Max Ryabinin 4ffb4d83c7
Remove "-r" when installing Petals in examples (#122) 1 year ago
Alexander Borzunov d29ef70c85
Update README.md 1 year ago
Alexander Borzunov 1d9aa77697
Update README.md 1 year ago
Alexander Borzunov da36470a4b
Update README.md 1 year ago
Alexander Borzunov 81b94df14b
Rework readme, move code example to the top, link draft of Colab (#118) 1 year ago
Alexander Borzunov 893987ebf8
Require hivemind==1.1.4 with p2pd v0.3.13 (#121) 1 year ago
Alexander Borzunov fc6722576b
Choose --num_blocks for bigscience/bloom-petals automatically (#119) 1 year ago
Alexander Borzunov f72c220404
Suppress quantization warning and fix dtype defaults in compute benchmark (#117) 1 year ago
Alexander Borzunov 643a054170
Make server use smart defaults (#115)
Summary:

```python
parser.add_argument('--attn_cache_size', type=str, default=None,
                    help='The size of GPU memory allocated for storing past attention keys/values between inference steps. '
                         'Examples: 500MB, 1.2GB, 1073741824 (bytes). Note that 1KB != 1KiB here. '
                         'Default: 0.5GiB * num_blocks * hidden_size / 14336. '
                         'The latter is the hidden size of the bigscience/bloom-petals model.')

parser.add_argument('--request_timeout', type=float, required=False, default=3 * 60,
                    help='Timeout (in seconds) for the whole rpc_forward/rpc_backward/rpc_forward_stream/rpc_backward_stream request')
parser.add_argument('--session_timeout', type=float, required=False, default=30 * 60,
                    help='Timeout (in seconds) for the whole inference session')
parser.add_argument('--step_timeout', type=float, required=False, default=60,
                    help="Timeout (in seconds) for waiting the next step's inputs inside an inference session")

parser.add_argument('--load_in_8bit', type=bool, default=None,
                    help="Convert the loaded model into mixed-8bit quantized model. Default: True if GPU is available")
```
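The `--attn_cache_size` default from the help string above scales linearly with the number of served blocks and the model's hidden size. A quick sanity check of that formula (the helper name is made up for illustration):

```python
GiB = 1024 ** 3

def default_attn_cache_size(num_blocks: int, hidden_size: int) -> int:
    # Default from the help string: 0.5 GiB per block, scaled by hidden_size
    # relative to bigscience/bloom-petals (hidden_size = 14336)
    return int(0.5 * GiB * num_blocks * hidden_size / 14336)

# For bloom-petals itself the scaling factor is 1, i.e. 0.5 GiB per block:
assert default_attn_cache_size(1, 14336) == 0.5 * GiB
assert default_attn_cache_size(10, 14336) == 5 * GiB
```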

Co-authored-by: justheuristic <justheuristic@gmail.com>
1 year ago
justheuristic 9e11f73242
Fix tile size on ampere (#116)

Co-authored-by: Aleksandr Borzunov <borzunov.alexander@gmail.com>
1 year ago
justheuristic 617d70f7dc
Support --load_in_8bit on pre-Turing GPUs (#113)
- Linear8bitLt now supports pre-Turing GPUs by temporarily upcasting quantized weights.
- added a test for Linear8bitLt accuracy with the new fallback; the accuracy is similar to the real thing (slightly better due to the non-quantized A)
- performance is roughly halfway between the default mode and memory_efficient_backward

Alternatives considered:
- cupy - slow, casts to float internally
- triton - fast but unstable: every third matmul attempt is a segfault
- bnb.functional.igemm (no lt) - "CuBLAS Error 8" on old GPUs
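The upcasting fallback can be illustrated with NumPy. This sketch assumes simple per-tensor absmax int8 quantization; bitsandbytes' actual LLM.int8() scheme is row-wise with outlier handling, so treat this only as the shape of the idea.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    # Absmax quantization: map the largest |weight| to 127
    scale = np.abs(w).max() / 127.0
    return np.round(w / scale).astype(np.int8), scale

def matmul_with_upcast(a: np.ndarray, w_int8: np.ndarray, scale: float):
    # Pre-Turing GPUs lack fast int8 kernels, so temporarily upcast the
    # quantized weights to float and run a regular matmul instead
    w = w_int8.astype(np.float32) * scale
    return a @ w  # `a` stays non-quantized, hence the slight accuracy gain

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
a = rng.standard_normal((4, 64)).astype(np.float32)
w_q, s = quantize_int8(w)
err = np.abs(a @ w - matmul_with_upcast(a, w_q, s)).max()
assert err < 1.0  # quantization error stays small relative to full precision
```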

Co-authored-by: Aleksandr Borzunov <borzunov.alexander@gmail.com>
1 year ago
Alexander Borzunov 1ea44b0d3c
Measure throughput for different configs, devices, and dtypes separately (#114) 1 year ago
justheuristic 01838f9a99
Fix Linear8bitlt state config, update tests (#112)
* fix state initializer
* update tests to actually use new code
* keep bias during quantization
1 year ago