petals/cli
justheuristic d271b75dd4
Let users specify sequence length instead of assuming 2048 (#52)
- Maximum length is now provided in `.inference_session(max_length=100)`
  - Previously, we would always assume max length = 2048
- Added a generic way to forward `**kwargs` to the inference session
  - For compatibility with #47
  - Note to @borzunov: it does *not* pass them arbitrarily; instead, it checks kwarg names at the bottom level
- `run_server` can be started with a custom max_length for inference
- Renamed `--cache_size_bytes` to `--attn_cache_bytes` (to avoid collision with `--cache_dir`)
- `--attn_cache_bytes` now supports human-readable sizes (e.g. 300MB instead of 314572800)
- Made some server-side errors more human-readable for users (e.g. when the max length is exceeded)
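
The "checks for kwarg names at the bottom level" approach could be sketched like this (a minimal illustration, not the actual Petals code; `forward_kwargs` and `inference_step` are hypothetical names):

```python
import inspect

def forward_kwargs(fn, **kwargs):
    # Forward only kwargs that fn explicitly declares; reject anything
    # unknown instead of passing arbitrary names through.
    accepted = inspect.signature(fn).parameters
    unknown = set(kwargs) - set(accepted)
    if unknown:
        raise TypeError(f"Unexpected kwargs: {sorted(unknown)}")
    return fn(**kwargs)

def inference_step(max_length=2048, temperature=1.0):
    # Stand-in for the bottom-level inference call.
    return {"max_length": max_length, "temperature": temperature}

forward_kwargs(inference_step, max_length=100)   # accepted
# forward_kwargs(inference_step, top_p=0.9)      # raises TypeError
```

Validating at the bottom level means a typo in a kwarg fails loudly at the call site rather than being silently ignored.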
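
The human-readable size parsing could look roughly like this (a hypothetical `parse_size` helper, not the actual CLI code; the commit's example, 300MB = 314572800, implies binary multiples of 1024, so this sketch follows that convention):

```python
import re

# Binary multiples, matching the 300MB -> 314572800 example above.
_UNITS = {"": 1, "B": 1, "KB": 2**10, "MB": 2**20, "GB": 2**30, "TB": 2**40}

def parse_size(text: str) -> int:
    """Convert a size like '300MB' or '1.5GB' into a byte count."""
    m = re.fullmatch(r"\s*([0-9]*\.?[0-9]+)\s*([A-Za-z]*)\s*", text)
    if m is None:
        raise ValueError(f"Cannot parse size: {text!r}")
    number, unit = float(m.group(1)), m.group(2).upper()
    if unit not in _UNITS:
        raise ValueError(f"Unknown unit: {unit!r}")
    return int(number * _UNITS[unit])

parse_size("300MB")  # 314572800
```

A helper like this plugs straight into argparse via `type=parse_size`, so `--attn_cache_bytes 300MB` arrives in the server as a plain integer.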

Co-authored-by: Aleksandr Borzunov <borzunov.alexander@gmail.com>
Co-authored-by: Alexander Borzunov <hxrussia@gmail.com>
2022-08-29 21:04:37 +03:00
__init__.py add quantization script for cpu 2022-06-12 05:59:11 +03:00
config.json add minimalistic benchmarks 2022-06-14 15:18:11 +03:00
convert_model.py Reduce vocabulary size in test model, fix bug in routing when overlapped (#45) 2022-08-17 18:50:52 +03:00
deploy_server.sh integrate mixed-8bit model (#39) 2022-08-04 09:57:37 +03:00
inference_one_block.py WIP: make DistributedBloom compliant with HF interface 2022-07-07 03:11:28 +03:00
local_server_config_example.cfg deploy swarm on local & remote machines 2022-06-29 13:52:43 +03:00
remote_server_config_example.cfg deploy swarm on local & remote machines 2022-06-29 13:52:43 +03:00
run_local_servers.sh integrate mixed-8bit model (#39) 2022-08-04 09:57:37 +03:00
run_remote_servers.sh Sequential and parallel forward / backward (#36) 2022-07-23 14:32:39 +03:00
run_server.py Let users specify sequence length instead of assuming 2048 (#52) 2022-08-29 21:04:37 +03:00
speed_test.py Add automated tests (#23) 2022-07-16 01:59:23 +03:00