diff --git a/README.md b/README.md
index c5e6872..8bd4b26 100644
--- a/README.md
+++ b/README.md
@@ -8,16 +8,14 @@

-**Warning: Llama 3.1 support is still under construction!** the latest models require custom RoPE configuration that we do not have in Petals yet; we will update the code to fix that within a day.
-
-Generate text with distributed **Llama (1-3)** (70B), **Falcon** (40B+), **BLOOM** (176B) (or their derivatives), and fine‑tune them for your own tasks — right from your desktop computer or Google Colab:
+Generate text with distributed **Llama 3.1** (up to 405B), **Mixtral** (8x7B), **Falcon** (40B+), or **BLOOM** (176B) and fine‑tune them for your own tasks — right from your desktop computer or Google Colab:
 
 ```python
 from transformers import AutoTokenizer
 from petals import AutoDistributedModelForCausalLM
 
 # Choose any model available at https://health.petals.dev
-model_name = "petals-team/StableBeluga2"  # This one is fine-tuned Llama 2 (70B)
+model_name = "meta-llama/Meta-Llama-3.1-405B-Instruct"
 
 # Connect to a distributed network hosting model layers
 tokenizer = AutoTokenizer.from_pretrained(model_name)
@@ -33,22 +31,26 @@ print(tokenizer.decode(outputs[0]))  # A cat sat on a mat...
 πŸš€  Try now in Colab

-πŸ” **Privacy.** Your data will be processed with the help of other people in the public swarm. Learn more about privacy [here](https://github.com/bigscience-workshop/petals/wiki/Security,-privacy,-and-AI-safety). For sensitive data, you can set up a [private swarm](https://github.com/bigscience-workshop/petals/wiki/Launch-your-own-swarm) among people you trust. +πŸ¦™ **Want to run Llama?** [Request access](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct) to its weights, then run `huggingface-cli login` in the terminal before loading the model. Or just try it in our [chatbot app](https://chat.petals.dev). -πŸ¦™ **Want to run Llama 2?** Request access to its weights at the ♾️ [Meta AI website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and πŸ€— [Model Hub](https://huggingface.co/meta-llama/Llama-2-70b-hf), then run `huggingface-cli login` in the terminal before loading the model. Or just try it in our [chatbot app](https://chat.petals.dev). +πŸ” **Privacy.** Your data will be processed with the help of other people in the public swarm. Learn more about privacy [here](https://github.com/bigscience-workshop/petals/wiki/Security,-privacy,-and-AI-safety). For sensitive data, you can set up a [private swarm](https://github.com/bigscience-workshop/petals/wiki/Launch-your-own-swarm) among people you trust. πŸ’¬ **Any questions?** Ping us in [our Discord](https://discord.gg/KdThf2bWVU)! ## Connect your GPU and increase Petals capacity -Petals is a community-run system — we rely on people sharing their GPUs. You can check out [available models](https://health.petals.dev) and help serving one of them! As an example, here is how to host a part of [Stable Beluga 2](https://huggingface.co/stabilityai/StableBeluga2) on your GPU: +Petals is a community-run system — we rely on people sharing their GPUs. You can help serving one of the [available models](https://health.petals.dev) or host a new model from πŸ€— [Model Hub](https://huggingface.co/models)! + +As an example, here is how to host a part of [Llama 3.1 (405B) Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct) on your GPU: + +πŸ¦™ **Want to host Llama?** [Request access](https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct) to its weights, then run `huggingface-cli login` in the terminal before loading the model. 🐧 **Linux + Anaconda.** Run these commands for NVIDIA GPUs (or follow [this](https://github.com/bigscience-workshop/petals/wiki/Running-on-AMD-GPU) for AMD): ```bash conda install pytorch pytorch-cuda=11.7 -c pytorch -c nvidia pip install git+https://github.com/bigscience-workshop/petals -python -m petals.cli.run_server petals-team/StableBeluga2 +python -m petals.cli.run_server meta-llama/Meta-Llama-3.1-405B-Instruct ``` πŸͺŸ **Windows + WSL.** Follow [this guide](https://github.com/bigscience-workshop/petals/wiki/Run-Petals-server-on-Windows) on our Wiki. 
@@ -58,7 +60,7 @@ python -m petals.cli.run_server petals-team/StableBeluga2
 ```bash
 sudo docker run -p 31330:31330 --ipc host --gpus all --volume petals-cache:/cache --rm \
     learningathome/petals:main \
-    python -m petals.cli.run_server --port 31330 petals-team/StableBeluga2
+    python -m petals.cli.run_server --port 31330 meta-llama/Meta-Llama-3.1-405B-Instruct
 ```
 
 🍏 **macOS + Apple M1/M2 GPU.** Install [Homebrew](https://brew.sh/), then run these commands:
 
@@ -66,19 +68,17 @@ sudo docker run -p 31330:31330 --ipc host --gpus all --volume petals-cache:/cach
 ```bash
 brew install python
 python3 -m pip install git+https://github.com/bigscience-workshop/petals
-python3 -m petals.cli.run_server petals-team/StableBeluga2
+python3 -m petals.cli.run_server meta-llama/Meta-Llama-3.1-405B-Instruct
 ```

πŸ“š  Learn more (how to use multiple GPUs, start the server on boot, etc.)

-πŸ’¬ **Any questions?** Ping us in [our Discord](https://discord.gg/X7DgtxgMhc)!
-
-πŸ¦™ **Want to host Llama 2?** Request access to its weights at the ♾️ [Meta AI website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and πŸ€— [Model Hub](https://huggingface.co/meta-llama/Llama-2-70b-hf), generate an πŸ”‘ [access token](https://huggingface.co/settings/tokens), then add `--token YOUR_TOKEN_HERE` to the `python -m petals.cli.run_server` command.
-
 πŸ”’ **Security.** Hosting a server does not allow others to run custom code on your computer. Learn more [here](https://github.com/bigscience-workshop/petals/wiki/Security,-privacy,-and-AI-safety).
 
+πŸ’¬ **Any questions?** Ping us in [our Discord](https://discord.gg/X7DgtxgMhc)!
+
 πŸ† **Thank you!** Once you load and host 10+ blocks, we can show your name or link on the [swarm monitor](https://health.petals.dev) as a way to say thanks. You can specify them with `--public_name YOUR_NAME`.
 
 ## How does it work?
@@ -122,22 +122,39 @@ Please see **Section 3.3** of our [paper](https://arxiv.org/pdf/2209.01188.pdf).
 
 Please see our [FAQ](https://github.com/bigscience-workshop/petals/wiki/FAQ:-Frequently-asked-questions#contributing) on contributing.
 
-### πŸ“œ Citation
+### πŸ“œ Citations
 
 Alexander Borzunov, Dmitry Baranchuk, Tim Dettmers, Max Ryabinin, Younes Belkada, Artem Chumachenko, Pavel Samygin, and Colin Raffel.
 [Petals: Collaborative Inference and Fine-tuning of Large Models.](https://arxiv.org/abs/2209.01188)
-_arXiv preprint arXiv:2209.01188,_ 2022.
+_Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)._ 2023.
 
 ```bibtex
-@article{borzunov2022petals,
+@inproceedings{borzunov2023petals,
   title = {Petals: Collaborative Inference and Fine-tuning of Large Models},
-  author = {Borzunov, Alexander and Baranchuk, Dmitry and Dettmers, Tim and Ryabinin, Max and Belkada, Younes and Chumachenko, Artem and Samygin, Pavel and Raffel, Colin},
-  journal = {arXiv preprint arXiv:2209.01188},
-  year = {2022},
+  author = {Borzunov, Alexander and Baranchuk, Dmitry and Dettmers, Tim and Riabinin, Maksim and Belkada, Younes and Chumachenko, Artem and Samygin, Pavel and Raffel, Colin},
+  booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)},
+  pages = {558--568},
+  year = {2023},
   url = {https://arxiv.org/abs/2209.01188}
 }
 ```
 
+Alexander Borzunov, Max Ryabinin, Artem Chumachenko, Dmitry Baranchuk, Tim Dettmers, Younes Belkada, Pavel Samygin, and Colin Raffel.
+[Distributed inference and fine-tuning of large language models over the Internet.](https://arxiv.org/abs/2312.08361)
+_Advances in Neural Information Processing Systems_ 36 (2024).
+
+```bibtex
+@inproceedings{borzunov2023distributed,
+  title = {Distributed inference and fine-tuning of large language models over the {I}nternet},
+  author = {Borzunov, Alexander and Ryabinin, Max and Chumachenko, Artem and Baranchuk, Dmitry and Dettmers, Tim and Belkada, Younes and Samygin, Pavel and Raffel, Colin},
+  booktitle = {Advances in Neural Information Processing Systems},
+  volume = {36},
+  pages = {12312--12331},
+  year = {2023},
+  url = {https://arxiv.org/abs/2312.08361}
+}
+```
+
 --------------------------------------------------------------------------------
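
Note (not part of the patch): both the old and new README text ask users to authenticate for gated Llama weights with `huggingface-cli login` or an πŸ”‘ [access token](https://huggingface.co/settings/tokens). In scripts or notebooks where an interactive login is inconvenient, the same thing can be done programmatically via `huggingface_hub`; the token value below is a placeholder.

```python
from huggingface_hub import login

# Equivalent to running `huggingface-cli login` once in a terminal.
# Paste a read-scoped token generated at https://huggingface.co/settings/tokens
login(token="hf_XXXXXXXXXXXXXXXXXXXXXXXX")
```
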

diff --git a/src/petals/models/bloom/block.py b/src/petals/models/bloom/block.py
index 439b9ca..01a74b2 100644
--- a/src/petals/models/bloom/block.py
+++ b/src/petals/models/bloom/block.py
@@ -7,7 +7,7 @@ from typing import Optional, Tuple
 
 import torch
 from transformers.modeling_attn_mask_utils import _prepare_4d_causal_attention_mask
-from transformers.models.bloom.modeling_bloom import BloomBlock, BloomModel, build_alibi_tensor
+from transformers.models.bloom.modeling_bloom import BloomBlock, build_alibi_tensor
 
 from petals.utils.misc import is_dummy
diff --git a/src/petals/models/bloom/config.py b/src/petals/models/bloom/config.py
index cda4cf7..d9b4f25 100644
--- a/src/petals/models/bloom/config.py
+++ b/src/petals/models/bloom/config.py
@@ -24,7 +24,7 @@ class DistributedBloomConfig(BloomConfig, ClientConfig, PTuneConfig, LMHeadConfi
     def from_pretrained(
         cls, model_name_or_path: Union[str, os.PathLike, None], *args, dht_prefix: Optional[str] = None, **kwargs
     ):
-        logger.info("Make sure you follow the BLOOM's terms of use: https://bit.ly/bloom-license")
+        logger.info("Make sure you follow the BLOOM terms of use: https://bit.ly/bloom-license")
 
         loading_from_repo = model_name_or_path is not None and not os.path.isdir(model_name_or_path)
         if loading_from_repo and dht_prefix is None:
diff --git a/src/petals/models/llama/block.py b/src/petals/models/llama/block.py
index 77f9a05..4ff9d3f 100644
--- a/src/petals/models/llama/block.py
+++ b/src/petals/models/llama/block.py
@@ -15,7 +15,6 @@ from transformers.models.llama.modeling_llama import (
     LlamaConfig,
     LlamaDecoderLayer,
     LlamaMLP,
-    LlamaModel,
     LlamaRMSNorm,
     repeat_kv,
     rotate_half,
@@ -132,7 +131,8 @@ class OptimizedLlamaDecoderLayer(LlamaDecoderLayer):
     def __init__(self, config: LlamaConfig):
         nn.Module.__init__(self)
         self.hidden_size = config.hidden_size
-        self.self_attn = OptimizedLlamaAttention(config=config)
+        self.self_attn = OptimizedLlamaAttention(config=config, layer_idx=0)
+        # layer_idx only matters for KV caching, and we re-implement it in Petals
         self.mlp = LlamaMLP(config)
         self.input_layernorm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
         self.post_attention_layernorm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
diff --git a/src/petals/models/llama/config.py b/src/petals/models/llama/config.py
index ae71a4c..d61f5e8 100644
--- a/src/petals/models/llama/config.py
+++ b/src/petals/models/llama/config.py
@@ -27,8 +27,8 @@ class DistributedLlamaConfig(LlamaConfig, ClientConfig, PTuneConfig, LMHeadConfi
         cls, model_name_or_path: Union[str, os.PathLike, None], *args, dht_prefix: Optional[str] = None, **kwargs
     ):
         logger.info(
-            "Make sure you follow the LLaMA's terms of use: "
-            "https://bit.ly/llama2-license for LLaMA 2, https://bit.ly/llama-license for LLaMA 1"
+            "Make sure you follow the Llama terms of use: "
+            "https://llama.meta.com/llama3/license, https://llama.meta.com/llama2/license"
         )
 
         loading_from_repo = model_name_or_path is not None and not os.path.isdir(model_name_or_path)
diff --git a/src/petals/models/mixtral/block.py b/src/petals/models/mixtral/block.py
index 7a2bd9f..58acd14 100644
--- a/src/petals/models/mixtral/block.py
+++ b/src/petals/models/mixtral/block.py
@@ -1,4 +1,3 @@
-import json
 from typing import Optional, Tuple
 
 import torch
@@ -8,7 +7,7 @@ from transformers.modeling_attn_mask_utils import (
     _prepare_4d_causal_attention_mask,
     _prepare_4d_causal_attention_mask_for_sdpa,
 )
-from transformers.models.mixtral.modeling_mixtral import MixtralDecoderLayer, MixtralModel
+from transformers.models.mixtral.modeling_mixtral import MixtralDecoderLayer
 
 
 class WrappedMixtralBlock(MixtralDecoderLayer):
diff --git a/src/petals/server/from_pretrained.py b/src/petals/server/from_pretrained.py
index 4a3b150..ac6a23e 100644
--- a/src/petals/server/from_pretrained.py
+++ b/src/petals/server/from_pretrained.py
@@ -64,10 +64,6 @@ def load_pretrained_block(
         max_disk_space=max_disk_space,
     )
 
-    # dummy load, check that keys match
-    report = block.load_state_dict(state_dict, strict=False)
-    assert not report.missing_keys, f"Some block weights are missing: {report.missing_keys}"
-
     for param_name, _ in block.named_parameters():
         assert param_name in state_dict, f"{param_name} not in state dict"
         param = state_dict[param_name]
@@ -76,7 +72,6 @@ def load_pretrained_block(
         set_module_tensor_to_device(block, param_name, "cpu", value=param, dtype=param.dtype)
 
     logger.info(f"Loaded {model_name} block {block_index}")
-    logger.debug(f"Details: {report}")
     return block
diff --git a/src/petals/utils/peft.py b/src/petals/utils/peft.py
index 5d93ce6..7817f73 100644
--- a/src/petals/utils/peft.py
+++ b/src/petals/utils/peft.py
@@ -267,7 +267,7 @@ def estimate_adapter_memory_per_block(
     **load_peft_kwargs,
 ) -> int:
     """Get the number of extra bytes used to store a set of adapters per given block"""
-    with init_empty_weights(include_buffers=True):
+    with init_empty_weights(include_buffers=False):
         block = get_model_block(block_config)
         base_block_parameters = sum(p.numel() for p in block.parameters())
         create_lora_adapter(block)
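
A note on the `src/petals/utils/peft.py` hunk (not part of the patch): `init_empty_weights` comes from πŸ€— Accelerate and creates module parameters on the `meta` device, so a block can be instantiated and measured without allocating real memory; with `include_buffers=False`, buffers keep real storage. A self-contained sketch of that behaviour, with a toy module standing in for a Petals block:

```python
import torch
from accelerate import init_empty_weights
from torch import nn


class TinyBlock(nn.Module):
    """Toy stand-in for a transformer block: one weight matrix plus one buffer."""

    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(1024, 1024)
        self.register_buffer("position_bias", torch.zeros(16, 1024))


with init_empty_weights(include_buffers=False):
    block = TinyBlock()

print(block.proj.weight.device)    # meta: parameters take no memory, but their shapes are known
print(block.position_bias.device)  # cpu: buffers are still materialized with real values
print(sum(p.numel() for p in block.parameters()))  # sizes remain computable, as used for the estimate
```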