petals/src/petals/client
Latest commit: d40eb6c701 — Fix prompt tuning after #464 (#501) — Alexander Borzunov, 9 months ago

Unfortunately, running inference in models with `"ptune" in config.tuning_mode` was broken after #464.
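For context, the commit message refers to a membership test on the client config's `tuning_mode` field, which gates the prompt-tuning code path during inference. A minimal sketch of that kind of check, using a hypothetical `ClientConfig` stand-in rather than the actual Petals class (the real one lives in `config.py` below):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ClientConfig:
    # Hypothetical stand-in for the Petals client config; the real class
    # is defined in petals/src/petals/client/config.py.
    tuning_mode: Optional[str] = None  # e.g. "ptune" or "deep_ptune"


def uses_prompt_tuning(config: ClientConfig) -> bool:
    # The prompt-tuning path is taken when the mode string mentions "ptune",
    # mirroring the `"ptune" in config.tuning_mode` condition quoted above.
    return config.tuning_mode is not None and "ptune" in config.tuning_mode


print(uses_prompt_tuning(ClientConfig(tuning_mode="ptune")))       # True
print(uses_prompt_tuning(ClientConfig(tuning_mode="deep_ptune")))  # True
print(uses_prompt_tuning(ClientConfig()))                          # False
```

Note that a substring check like this matches both plain and deep prompt tuning modes with one condition, which is presumably why the commit description quotes the `in` test rather than an equality comparison.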
File                        Last commit                                                                               Age
routing                     Move SequenceManagerConfig -> ClientConfig, petals.dht_utils -> petals.utils.dht (#463)   10 months ago
__init__.py                 Move SequenceManagerConfig -> ClientConfig, petals.dht_utils -> petals.utils.dht (#463)   10 months ago
config.py                   Move SequenceManagerConfig -> ClientConfig, petals.dht_utils -> petals.utils.dht (#463)   10 months ago
from_pretrained.py          Make client compatible with transformers' GenerationMixin (#464)                          10 months ago
inference_session.py        Create model index in DHT (#491)                                                          9 months ago
lm_head.py                  Make client compatible with transformers' GenerationMixin (#464)                          10 months ago
ptune.py                    Make client compatible with transformers' GenerationMixin (#464)                          10 months ago
remote_forward_backward.py  Move SequenceManagerConfig -> ClientConfig, petals.dht_utils -> petals.utils.dht (#463)   10 months ago
remote_generation.py        Fix prompt tuning after #464 (#501)                                                       9 months ago
remote_sequential.py        Make client compatible with transformers' GenerationMixin (#464)                          10 months ago
sequential_autograd.py      Make client compatible with transformers' GenerationMixin (#464)                          10 months ago