mirror of
https://github.com/bigscience-workshop/petals
synced 2024-10-31 09:20:41 +00:00
d271b75dd4
- Maximum length is now provided in `.inference_session(max_length=100)`; previously, we would always assume max length = 2048.
- Added a generic way to forward `**kwargs` to the inference session, for compatibility with #47. Note to @borzunov: it does *not* pass them arbitrarily, but instead checks for kwarg names at the bottom level.
- run_server can be started with a custom max_length for inference.
- Renamed `--cache_size_bytes` to `--attn_cache_bytes` (to avoid collision with `--cache_dir`).
- `--attn_cache_bytes` can now accept humane file sizes (e.g. 300MB instead of 314572800).
- Made some server-side errors more human-readable to the user (e.g. when max length is exceeded).

Co-authored-by: Aleksandr Borzunov <borzunov.alexander@gmail.com>
Co-authored-by: Alexander Borzunov <hxrussia@gmail.com>
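The commit notes that forwarded `**kwargs` are not passed through arbitrarily, but checked against the accepted kwarg names at the bottom level. A minimal sketch of that pattern (the function names `forward_known_kwargs` and `open_session` are illustrative, not the actual Petals code):

```python
import inspect

def forward_known_kwargs(func, **kwargs):
    """Forward only the kwargs that `func` actually declares.

    Instead of passing **kwargs through blindly, inspect the callee's
    signature and keep only the parameter names it accepts.
    """
    accepted = set(inspect.signature(func).parameters)
    filtered = {k: v for k, v in kwargs.items() if k in accepted}
    return func(**filtered)

def open_session(max_length=2048):
    # Hypothetical bottom-level callee that only knows `max_length`.
    return max_length

# `temperature` is not declared by open_session, so it is dropped.
result = forward_known_kwargs(open_session, max_length=100, temperature=0.9)
```

Whether unknown kwargs are dropped or rejected with an error is a design choice; the sketch above drops them silently.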
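The commit's example maps 300MB to 314572800 bytes, i.e. 300 × 1024², so the units are treated as binary multipliers. A self-contained sketch of such a parser (the server may well use an existing library instead; this is only an illustration of the conversion):

```python
import re

# Binary multipliers: 300MB -> 300 * 1024**2 = 314572800,
# matching the example in the commit message.
_UNITS = {"": 1, "KB": 1024, "MB": 1024**2, "GB": 1024**3, "TB": 1024**4}

def parse_size(text):
    """Parse a humane size string like '300MB' into a byte count."""
    match = re.fullmatch(r"\s*(\d+(?:\.\d+)?)\s*([KMGT]?B)?\s*", text, re.IGNORECASE)
    if match is None:
        raise ValueError(f"unrecognized size: {text!r}")
    number, unit = match.groups()
    return int(float(number) * _UNITS[(unit or "").upper()])

parse_size("300MB")      # -> 314572800
parse_size("314572800")  # plain byte counts still work
```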
__init__.py
config.json
convert_model.py
deploy_server.sh
inference_one_block.py
local_server_config_example.cfg
remote_server_config_example.cfg
run_local_servers.sh
run_remote_servers.sh
run_server.py
speed_test.py