Commit Graph

363 Commits

Author SHA1 Message Date
Alexander Borzunov
66f1799d32
Set default --step_timeout to 5 min (#133) 2022-12-05 13:44:18 +04:00
Alexander Borzunov
b873d92ffa
Update README.md 2022-12-04 22:48:51 +04:00
Alexander Borzunov
5d5d2666b8
Mention parallel inference 2022-12-04 22:48:32 +04:00
Alexander Borzunov
955eae30b3
Mention 1 sec/token explicitly 2022-12-04 22:10:15 +04:00
Alexander Borzunov
33c210b973
Update Colab notebook 2022-12-03 20:38:18 +04:00
Alexander Borzunov
f56edaa13f
Fix inference and rpc_info() fault tolerance (#131) 2022-12-03 19:28:15 +04:00
justheuristic
79a4308992
Clear trigger before engaging in update (#130)
Update sequence_manager.py
2022-12-03 17:42:52 +03:00
Alexander Borzunov
b8e1c1b7f5
Revert to hivemind==1.1.3 for stability (#129) 2022-12-03 17:36:05 +04:00
justheuristic
68c85e7492
Avoid synchronous updates, ban peers based on request outcome (#127)
- sequence_manager now keeps itself up to date - no need to update it manually
- if a peer fails a request, the sequence manager will ban this peer temporarily; ban times increase with failure streaks (see the sketch after this list)
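A minimal sketch of the banning policy described in the second bullet, assuming a simple in-memory ban list (all names here are illustrative, not the actual petals.client API):

```python
import time

class PeerBanList:
    """Temporarily bans peers; ban time grows with the failure streak."""

    def __init__(self, base_ban_time: float = 1.0):
        self.base_ban_time = base_ban_time
        self.failure_streaks = {}  # peer_id -> consecutive failures
        self.banned_until = {}     # peer_id -> monotonic unban timestamp

    def on_failure(self, peer_id: str) -> None:
        streak = self.failure_streaks.get(peer_id, 0) + 1
        self.failure_streaks[peer_id] = streak
        # Exponential growth: 2s, 4s, 8s, ... for streaks of 1, 2, 3, ...
        self.banned_until[peer_id] = time.monotonic() + self.base_ban_time * 2**streak

    def on_success(self, peer_id: str) -> None:
        self.failure_streaks.pop(peer_id, None)
        self.banned_until.pop(peer_id, None)

    def is_banned(self, peer_id: str) -> bool:
        return time.monotonic() < self.banned_until.get(peer_id, 0.0)
```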

Co-authored-by: Alexander Borzunov <borzunov.alexander@gmail.com>
2022-12-03 16:13:15 +03:00
Alexander Borzunov
9dbf5e2e6f
Set dht.num_workers = n_layer, update_period = 150, expiration = 300 (#125) 2022-12-03 15:26:57 +04:00
Max Ryabinin
3ca8b4f082
Fix typos with codespell (#126) 2022-12-03 14:19:37 +03:00
justheuristic
8491ed2bd3
Add checks for forward() inputs on the client side (#123) 2022-12-03 15:02:48 +04:00
Max Ryabinin
055f85b83e
Call block.load_state_dict only once (#124) 2022-12-03 15:01:56 +04:00
Artem Chumachenko
0855aa7347
Update notebooks to use full BLOOM-176B (#104)
Co-authored-by: Alexander Borzunov <borzunov.alexander@gmail.com>
2022-12-03 14:09:21 +04:00
Max Ryabinin
4ffb4d83c7
Remove "-r" when installing Petals in examples (#122) 2022-12-03 11:21:45 +04:00
Alexander Borzunov
d29ef70c85
Update README.md 2022-12-03 01:14:02 +04:00
Alexander Borzunov
1d9aa77697
Update README.md 2022-12-03 00:46:52 +04:00
Alexander Borzunov
da36470a4b
Update README.md 2022-12-03 00:46:08 +04:00
Alexander Borzunov
81b94df14b
Rework readme, move code example to the top, link draft of Colab (#118) 2022-12-03 00:17:57 +04:00
Alexander Borzunov
893987ebf8
Require hivemind==1.1.4 with p2pd v0.3.13 (#121) 2022-12-03 00:16:14 +04:00
Alexander Borzunov
fc6722576b
Choose --num_blocks for bigscience/bloom-petals automatically (#119) 2022-12-02 23:17:44 +04:00
Alexander Borzunov
f72c220404
Suppress quantization warning and fix dtype defaults in compute benchmark (#117) 2022-12-02 20:07:28 +04:00
Alexander Borzunov
643a054170
Make server use smart defaults (#115)
Summary:

```python
import argparse

# Parser setup added so the snippet runs standalone; the actual server CLI
# may construct its parser differently.
parser = argparse.ArgumentParser()
parser.add_argument('--attn_cache_size', type=str, default=None,
                    help='The size of GPU memory allocated for storing past attention keys/values between inference steps. '
                         'Examples: 500MB, 1.2GB, 1073741824 (bytes). Note that 1KB != 1KiB here. '
                         'Default: 0.5GiB * num_blocks * hidden_size / 14336. '
                         'The latter is the hidden size of the bigscience/bloom-petals model.')

parser.add_argument('--request_timeout', type=float, required=False, default=3 * 60,
                    help='Timeout (in seconds) for the whole rpc_forward/rpc_backward/rpc_forward_stream/rpc_backward_stream request')
parser.add_argument('--session_timeout', type=float, required=False, default=30 * 60,
                    help='Timeout (in seconds) for the whole inference session')
parser.add_argument('--step_timeout', type=float, required=False, default=60,
                    help="Timeout (in seconds) for waiting the next step's inputs inside an inference session")

parser.add_argument('--load_in_8bit', type=bool, default=None,
                    help="Convert the loaded model into mixed-8bit quantized model. Default: True if GPU is available")
```
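To make the `--attn_cache_size` default concrete, here is a minimal sketch of the stated formula (the helper name is illustrative):

```python
GiB = 1024 ** 3

def default_attn_cache_size(num_blocks: int, hidden_size: int) -> int:
    # 0.5 GiB per block, scaled by hidden_size relative to bloom-petals (14336)
    return round(0.5 * GiB * num_blocks * hidden_size / 14336)

# e.g., a server hosting 8 blocks of bigscience/bloom-petals:
print(default_attn_cache_size(8, 14336) / GiB)  # 4.0 GiB
```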

Co-authored-by: justheuristic <justheuristic@gmail.com>
2022-12-02 17:36:39 +04:00
justheuristic
9e11f73242
Fix tile size on Ampere (#116)

Co-authored-by: Aleksandr Borzunov <borzunov.alexander@gmail.com>
2022-12-02 16:16:42 +03:00
justheuristic
617d70f7dc
Support --load_in_8bit on pre-Turing GPUs (#113)
- Linear8bitLt now supports pre-Turing GPUs by temporarily upcasting quantized weights (see the sketch after this list).
- added a test for Linear8bitLt accuracy with the new fallback; the accuracy is similar to the native implementation (slightly better due to the non-quantized A)
- performance is roughly halfway between the default mode and memory_efficient_backward
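
A conceptual sketch of the upcasting fallback from the first bullet, assuming per-row absmax int8 quantization (illustrative names, not the actual bitsandbytes internals):

```python
import torch

def linear8bit_pre_turing(x: torch.Tensor, weight_int8: torch.Tensor,
                          row_absmax: torch.Tensor) -> torch.Tensor:
    # Temporarily upcast int8 weights to fp16 instead of calling igemmlt,
    # which requires Turing-or-newer tensor cores; x is expected in fp16
    weight_fp16 = (weight_int8.float() * row_absmax.view(-1, 1) / 127).half()
    return x @ weight_fp16.t()
```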

Alternatives considered:
- cupy - slow, casting to float internally
- triton - fast but very unstable: every third matmul attempt segfaults
- bnb.functional.igemm (no lt) - "CuBLAS Error 8" on old GPUs

Co-authored-by: Aleksandr Borzunov <borzunov.alexander@gmail.com>
2022-12-02 15:10:24 +03:00
Alexander Borzunov
1ea44b0d3c
Measure throughput for different configs, devices, and dtypes separately (#114) 2022-12-02 15:49:31 +04:00
justheuristic
01838f9a99
Fix Linear8bitLt state config, update tests (#112)
* fix state initializer
* update tests to actually use new code
* keep bias during quantization
2022-12-02 13:04:40 +03:00
Aleksandr Borzunov
96033de921 Fix script for running servers robustly 2022-12-02 09:23:08 +00:00
Aleksandr Borzunov
85cf32d2a4 Add script to run servers robustly 2022-12-02 07:37:45 +00:00
justheuristic
088713912d
Patch Linear8bit to enable CxB backward (#111)
A patch to bitsandbytes 0.34.0 that introduces an option to run the backward pass in the default (fast) matrix layout.
Authors: cxb inversion by @borzunov, original 8bit code by @timdettmers

* optimized layout inversion code by @borzunov ([original code](https://colab.research.google.com/drive/1EJ0MKifajXSSVq7O2_QGwtb0l6gRAGrh?usp=sharing)) to use fewer forward calls
* implemented CustomLinear8bitLt, a child of Linear8bitLt that can do backward without CB
* added exact match tests for layouts and linear layers: see tests/test_linear8bitlt.py
* switched petals to the new layer type

Core idea: layouts apply the same permutation to every tile in the matrix. We can treat this as (batched) gather ops, reshaping the input tensor so that the ij-th gather op applies to the ij-th elements in each tile (see the sketch below).
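
A toy illustration of this idea, with made-up tile sizes and a random per-tile permutation (not the actual Turing/Ampere layouts):

```python
import torch

tile_rows, tile_cols = 8, 32
perm = torch.randperm(tile_rows * tile_cols)  # the layout's per-tile permutation
inv_perm = torch.argsort(perm)                # its inverse

def undo_tiled_layout(x: torch.Tensor) -> torch.Tensor:
    rows, cols = x.shape  # both assumed divisible by the tile dims
    # Split into tiles and flatten each tile into one row
    tiles = (x.reshape(rows // tile_rows, tile_rows, cols // tile_cols, tile_cols)
              .permute(0, 2, 1, 3)
              .reshape(-1, tile_rows * tile_cols))
    tiles = tiles[:, inv_perm]  # one batched gather inverts every tile at once
    return (tiles.reshape(rows // tile_rows, cols // tile_cols, tile_rows, tile_cols)
                 .permute(0, 2, 1, 3)
                 .reshape(rows, cols))
```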

Prototype: 
Layout info: https://github.com/TimDettmers/bitsandbytes/blob/main/csrc/kernels.cu#L2130-L2136

Co-authored-by: Alexander Borzunov <hxrussia@gmail.com>
Co-authored-by: Aleksandr Borzunov <borzunov.alexander@gmail.com>
Co-authored-by: Tim Dettmers <tim.dettmers@gmail.com>
2022-12-02 10:11:21 +03:00
justheuristic
8dc0f513ba
Hotfix span selection (#110)
Fix an issue in span selection that was introduced in #106
2022-12-01 11:21:10 +03:00
justheuristic
a2066a4096
Optimize RemoteSequenceManager (#106)
- [x] made RemoteSequenceManager into a background thread that pre-fetches information instead of running just in time
- [x] moved routing-related stuff to petals.client.routing
- [x] extract remote peer routing information to RemoteSequenceInfo
- [x] made sure that the code survives continued use (e.g. one hour)
- [x] updated every spot where update_ is called manually
- [x] modified get_sequence to check that the thread is alive, warn if not
- [x] removed max_retries, switched rpc_info to exponential backoff (see the sketch after this list)
- [x] fixed a bug that caused RemoteSeq* to lose user-defined hyperparameters (e.g. timeout) upon subsequencing (sequential[3:5])
- [x] moved client-side points strategy to client.routing
- [x] ensured that RemoteSequenceManager thread created in get_remote_module properly shuts down when the module is destroyed
- [x] resolved minor affected todos
- [x] modified tests to no longer use PYTHONPATH
- [x] worked around protocol error in rpc_info
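
The exponential backoff mentioned for rpc_info, as a generic retry loop (the delays and jitter here are illustrative choices, not the exact values used):

```python
import random
import time

def retry_with_backoff(fn, base_delay: float = 1.0, max_delay: float = 60.0):
    """Retry fn() until it succeeds, doubling the delay after each failure."""
    delay = base_delay
    while True:
        try:
            return fn()
        except Exception:
            time.sleep(delay + random.uniform(0, delay / 2))  # jittered wait
            delay = min(delay * 2, max_delay)
```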


Co-authored-by: Aleksandr Borzunov <borzunov.alexander@gmail.com>
Co-authored-by: Artem Chumachenko <artek.chumak@gmail.com>
2022-12-01 10:25:55 +03:00
Artem Chumachenko
7d859a947b
Expose request_timeout to DistributedBloomConfig (#105)
Co-authored-by: Alexander Borzunov <borzunov.alexander@gmail.com>
2022-12-01 09:46:24 +04:00
Max Ryabinin
9faf08b898
Remove unused imports, add missing arguments to docstrings (#108)
2022-12-01 06:47:05 +03:00
justheuristic
b3115dac58
Update throughput.py 2022-12-01 05:39:04 +03:00
Max Ryabinin
5c2990e1b5
Add Dockerfile (#82)
This commit adds a Dockerfile that sets up the environment for Petals, as well as a GitHub Action to build the corresponding image on each push to the main branch.
2022-12-01 04:31:55 +03:00
Alexander Borzunov
0a1cd3b9ba
Fix ptune with low_cpu_mem_usage=True (as in Colab) (#103)
Fixes:

- An exception while creating a model with `ptune/deep_ptune` and `low_cpu_mem_usage=True` (which is currently the default).
- A dtype mismatch between the prompts and the rest of the model in `.forward()`.
2022-11-30 19:57:42 +04:00
Alexander Borzunov
43ac6016ac
Fix dtypes in backend schemas (#99)
Currently, the schemas use `torch.float32`, so all inputs and outputs are converted to float32 before sending and after receiving on both servers and clients. This creates a huge slowdown for the system (a back-of-the-envelope sketch follows the list below).

* This PR makes the schemas use the server's `--torch_dtype` argument (the default is `torch.bfloat16` for BLOOM-176B)
* Adds an option for the client to request a specific output compression. Use case 1: the client sends quantized inputs and expects quantized outputs in return. Use case 2: the client uses quantization for gradients w.r.t. activations but keeps grads w.r.t. __prompts__ as is for greater precision.
* Adds a comment explaining the purpose of NoSpendingPolicy, since we likely won't have it for the workshop
* Adds a test with custom compression (a rough implementation for testing purposes)
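
To make the float32 overhead concrete, a quick back-of-the-envelope in PyTorch (the batch shape is an arbitrary example, not a measured workload):

```python
import torch

# One batch of BLOOM-176B hidden states: (batch, seq_len, hidden_size)
hidden = torch.randn(1, 2048, 14336)
fp32_mib = hidden.element_size() * hidden.nelement() / 2**20
bf16_mib = hidden.to(torch.bfloat16).element_size() * hidden.nelement() / 2**20
print(fp32_mib, bf16_mib)  # 112.0 56.0 - bfloat16 halves the traffic per hop
```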

Co-authored-by: justheuristic <justheuristic@gmail.com>
2022-11-30 17:40:43 +03:00
Alexander Borzunov
7bd5916744
Make Petals a pip-installable package (attempt 2) (#102)
1. Petals can now be installed using `pip install git+https://github.com/bigscience-workshop/petals`
    - If you have already cloned the repo, you can run `pip install .` or `pip install .[dev]`
2. Moved `src` => `src/petals`
    - Replaced `from src.smth import smth` with `from petals.smth import smth`
3. Moved `cli` => `src/petals/cli`
    - Replaced `python -m cli.run_smth` with `python -m petals.cli.run_smth` (all utilities are now available right after pip installation)
4. Moved the `requirements*.txt` contents to `setup.cfg` (a `requirements.txt` for packages is poorly supported by modern packaging utils)
5. Increased the package version from `0.2` to `1.0alpha1`
2022-11-30 10:41:13 +04:00
Alexander Borzunov
0c3781a89c
Shorten bullet points in readme 2022-11-30 07:20:27 +04:00
Alexander Borzunov
ab41223b17
Fix dtype- and device-related client issues (#98)
This PR:

1. Makes inference/forward/backward calls on the client remember the dtype and device of the source tensors, then moves/casts the outputs back to the same dtype/device (see the sketch after this list). This way:
    - Users don't need to make changes in the code launching `RemoteSequential` to make it run on a different device.
    - `model.generate()` also starts to support both CPU and GPU.

2. Sets the default `low_cpu_mem_usage=True` and the client's request timeout to 20 sec.

3. Removes excess casts to float32 left in Dmitry's code.

4. (minor) Improves error messages.
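
A minimal sketch of the round-trip from item 1 (the wrapper and wire format are illustrative assumptions, not the actual client code):

```python
import torch

def call_remote(remote_fn, inputs: torch.Tensor) -> torch.Tensor:
    src_dtype, src_device = inputs.dtype, inputs.device
    # Assumed wire format: bfloat16 on CPU (an illustrative choice)
    outputs = remote_fn(inputs.to(dtype=torch.bfloat16, device="cpu"))
    # Cast/move the outputs back so downstream code sees the caller's dtype/device
    return outputs.to(dtype=src_dtype, device=src_device)
```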
2022-11-29 16:08:02 +04:00
Alexander Borzunov
c6e1b5a8e5
Add various server timeouts, lower --max_batch_size and --inference_max_length defaults (#97) 2022-11-29 10:00:47 +04:00
Alexander Borzunov
d8ef09146e
Improve server's logging (#96)
Log all RPC calls with block indices and shortened peer IDs, print attention cache stats.
2022-11-29 08:45:50 +04:00
Artem Chumachenko
fdb3583a8c
Add Beam Search decoding algorithm (#87)
Add beam_search
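
For context, a textbook sketch of a single beam-search step (generic code, not the petals implementation):

```python
import torch

def beam_step(log_probs: torch.Tensor, beam_scores: torch.Tensor, num_beams: int):
    """One step: log_probs is (num_beams, vocab_size), beam_scores is (num_beams,)."""
    vocab_size = log_probs.shape[-1]
    scores = beam_scores.unsqueeze(-1) + log_probs           # cumulative log-probs
    top_scores, flat_idx = scores.flatten().topk(num_beams)  # best continuations
    beam_idx = torch.div(flat_idx, vocab_size, rounding_mode="floor")  # beam to extend
    token_idx = flat_idx % vocab_size                        # token to append
    return top_scores, beam_idx, token_idx
```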
2022-11-28 13:02:07 +04:00
Alexander Borzunov
fef7257fe0
Try to fix protobuf versions once again (#95)
The goals of these changes are:

- Make Petals work in Colab right after just doing `pip install -r requirements.txt`
- Make tests work independently of the protobuf package version chosen while installing dependencies
2022-11-28 12:18:03 +04:00
Aleksandr Borzunov
1b51703444 Revert protobuf version change 2022-11-28 07:19:54 +00:00
Alexander Borzunov
b26b0b7121
Require hivemind with fixed compression and protobuf working on Colab (#94) 2022-11-28 10:51:37 +04:00
Alexander Borzunov
8a73b41a42
Make ServerState announcements work better (#93)
- Before this PR, `ServerState.JOINING` was announced only once. This announcement quickly expires in the case of the full-size BLOOM, since loading blocks takes several minutes. This PR fixes it, so `ServerState.JOINING` is announced periodically in a thread until the blocks are loaded (see the sketch after this list).

- This PR also makes the `Server` class a non-thread, so it runs in the main thread and can catch `KeyboardInterrupt`. This is important, since if we are downloading blocks right now, we need to stop it and send the `ServerState.OFFLINE` message. Note that `ModuleContainer` is still a thread.

- (minor) For the sake of readability, I moved the `ModuleContainer.create()` definition, so it is now defined before `Server.__init__()` (this is because `.create()` is invoked first).
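
A sketch of the periodic announcement thread from the first bullet (the function and state names are illustrative):

```python
import threading

def announce_periodically(announce_fn, state: str = "JOINING",
                          interval: float = 30.0) -> threading.Event:
    """Re-announce `state` every `interval` seconds until the returned event is set."""
    stop = threading.Event()

    def loop():
        while not stop.is_set():
            announce_fn(state)   # refresh the DHT record before it expires
            stop.wait(interval)

    threading.Thread(target=loop, daemon=True).start()
    return stop  # call stop.set() once all blocks are loaded
```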
2022-11-28 07:44:03 +04:00
Alexander Borzunov
dc71574a63
Use public swarm by default (#92)
This PR makes servers and clients use the public swarm's bootstrap peers if no other initial peers are specified (a sketch of the fallback logic follows below).

If you'd like a server to start a new swarm, provide the `--new_swarm` CLI argument.
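
A sketch of that fallback logic (the bootstrap address below is a placeholder, not a real peer):

```python
PUBLIC_INITIAL_PEERS = [
    "/dns/bootstrap.petals.example/tcp/31337/p2p/QmPlaceholderPeerId",  # placeholder
]

def resolve_initial_peers(initial_peers: list, new_swarm: bool) -> list:
    if new_swarm:
        return []  # start a fresh swarm instead of joining the public one
    return initial_peers or PUBLIC_INITIAL_PEERS  # fall back to public bootstrap peers
```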
2022-11-28 03:44:41 +04:00
Alexander Borzunov
11d6ba683c
Make inference, forward, and backward fully fault-tolerant (#91) 2022-11-27 04:11:54 +04:00