petals/src/petals/cli/inference_one_block.py

import argparse

import torch
from hivemind.utils.logging import get_logger
from tqdm.auto import trange
from transformers import BloomConfig
from transformers.models.bloom.modeling_bloom import build_alibi_tensor

from petals.models.bloom.block import BloomBlock

logger = get_logger(__name__)

logger.warning("inference_one_block will soon be deprecated in favour of tests!")

def print_device_info(device=None):
    """Prints device stats. Code from https://stackoverflow.com/a/53374933/12891528"""
    device = torch.device(device or ("cuda" if torch.cuda.is_available() else "cpu"))
    logger.info(f"Using device: {device}")

    # Additional info when using CUDA
    if device.type == "cuda":
        logger.info(torch.cuda.get_device_name(0))
        logger.info("Memory Usage:")
        logger.info(f"Allocated: {round(torch.cuda.memory_allocated(0) / 1024 ** 3, 1)} GB")
        # torch.cuda.memory_cached() is deprecated; memory_reserved() is its current name
        logger.info(f"Cached: {round(torch.cuda.memory_reserved(0) / 1024 ** 3, 1)} GB")

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Run a single BLOOM block locally on dummy data")
    parser.add_argument("--config", required=True, type=str, help="Path to a config json file")
    parser.add_argument("--state_dict", default=None, type=str, help="Optional path to saved block state dict")
    parser.add_argument("--num_steps", default=500, type=int, help="How many inference steps to run")
    parser.add_argument("--device", default=None, type=str, help="Run inference on this device")
    args = parser.parse_args()

    if args.device is None:
        args.device = "cuda" if torch.cuda.is_available() else "cpu"

    config = BloomConfig.from_json_file(args.config)
    block = BloomBlock(config).to(args.device)

    cache = None
    for i in trange(args.num_steps):
        # Feed one dummy token per step, growing the attention cache autoregressively
        dummy_input = torch.randn(1, 1, config.hidden_size, device=args.device)
        # transformers' build_alibi_tensor takes an attention mask, not a sequence length
        alibi = build_alibi_tensor(torch.ones(1, i + 1), config.num_attention_heads, dtype=dummy_input.dtype).to(args.device)
        # BloomBlock also expects a boolean mask (True = masked); with a single query
        # token, causal attention masks nothing
        attention_mask = torch.zeros(1, 1, 1, i + 1, dtype=torch.bool, device=args.device)
        with torch.no_grad():
            outputs, cache = block.forward(
                dummy_input, alibi=alibi, attention_mask=attention_mask, use_cache=True, layer_past=cache
            )

    print_device_info(args.device)
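
For a quick smoke test, the script can be run as a module once petals is installed; the config filename below is a placeholder, and --num_steps / --config are the flags defined by the script itself:

    python -m petals.cli.inference_one_block --config bloom_config.json --num_steps 100

If no config JSON is at hand, one can be written with transformers itself. A minimal sketch, assuming you only need something BloomConfig.from_json_file can read back; the hyperparameters are made-up toy values for a smoke test, not a real BLOOM size:

    from transformers import BloomConfig

    # Hypothetical toy dimensions; any valid BloomConfig fields work here
    config = BloomConfig(hidden_size=1024, n_head=16, n_layer=24)
    config.to_json_file("bloom_config.json")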