import pytest
import torch
import torch.nn.functional as F

from hivemind import DHT, BatchTensorDescriptor, get_logger
from hivemind.proto import runtime_pb2

from petals import AutoDistributedConfig
from petals.client import RemoteSequenceManager, RemoteSequential
from petals.data_structures import UID_DELIMITER
|
Add LLaMA support (#323)
This PR:
1. **Abolishes the model conversion procedure.** Now, models are downloaded directly from original repositories like https://huggingface.co/bigscience/bloom. Servers download only shards with blocks to be hosted, and clients download only shards with input/output embeddings and layernorms.
- BLOOM is loaded from `bigscience/bloom`, but we use the DHT prefix `bigscience/bloom-petals` for backward compatibility. Same with smaller BLOOMs and BLOOMZ.
- LLaMA can be loaded from any repo like `username/llama-65b-hf`, but we use the DHT prefix `llama-65b-hf` (without the username) to accomodate blocks from different repos (there're a few of them with minor differences, such as `Llama` vs. `LLaMA` in the class name).
2. **Refactors the client to generalize it for multiple models.** Now, we have `petals.models` packages that contain model-specific code (e.g. `petals.models.bloom`, `petals.models.llama`). General code (e.g. CPU-efficient LM head, p-tuning) is kept in `petals.client`.
3. **Introduces** `WrappedLlamaBlock`, `DistributedLlamaConfig`, `DistributedLlamaForCausalLM`, `DistributedLlamaForSequenceClassification`, and `DistributedLlamaModel` compatible with Petals functionality (p-tuning, adapters, etc.).
4. **Introduces** `AutoDistributedConfig` that automatically chooses the correct config class (`DistributedLlamaConfig` or `DistributedBloomConfig`). The refactored configs contain all model-specific info for both clients and servers.
Upgrade instructions:
- Remove disk caches for blocks in old (converted) format to save disk space. That is, remove `~/.cache/petals/model--bigscience--bloom-petals` and `~/.cache/petals/model--bigscience--bloomz-petals` directories (if present).
2023-06-23 11:46:10 +00:00
|
|
|
from petals.server.from_pretrained import load_pretrained_block
|
2023-03-12 21:49:04 +00:00
|
|
|
from test_utils import *

logger = get_logger(__name__)


@pytest.mark.forked
def test_remote_sequential():
    config = AutoDistributedConfig.from_pretrained(MODEL_NAME, initial_peers=INITIAL_PEERS)
    dht = DHT(initial_peers=config.initial_peers, client_mode=True, start=True)
    test_inputs = torch.randn(1, 5, config.hidden_size, requires_grad=True)
    grad_proj = torch.randn(1, 5, config.hidden_size)

    sequential = RemoteSequential(config, dht=dht)

    full_outputs = sequential(test_inputs)
    (full_outputs * grad_proj).sum().backward()
    assert test_inputs.grad is not None
    full_grad = test_inputs.grad.clone()
    test_inputs.grad.data.zero_()

    first_half = sequential[: config.num_hidden_layers // 2]
    second_half = sequential[config.num_hidden_layers // 2 :]
    assert len(first_half) + len(second_half) == len(sequential)
    assert abs(len(first_half) - len(second_half)) == config.num_hidden_layers % 2
    for m in sequential, first_half, second_half:
        assert isinstance(repr(m), str)

    hidden = first_half(test_inputs)
    assert isinstance(hidden, torch.Tensor)
    assert hidden.shape == test_inputs.shape
    assert hidden.requires_grad
    second_half_outputs = second_half(hidden)
    assert torch.allclose(second_half_outputs, full_outputs, atol=1e-3)

    (second_half_outputs * grad_proj).sum().backward()
    assert torch.allclose(test_inputs.grad, full_grad, atol=3e-2)
    # test RemoteSequential with lossy compression
    block_uids = [f"{config.dht_prefix}{UID_DELIMITER}{i}" for i in range(config.num_hidden_layers)]
    lossy_sequential = RemoteSequential(
        config, sequence_manager=DummyCustomSequenceManager(config, block_uids, dht=dht)
    )

    test_inputs.grad = None
    approx_outputs = lossy_sequential(test_inputs)
    (approx_outputs * grad_proj).sum().backward()

    assert not torch.allclose(approx_outputs, full_outputs, rtol=0, atol=1e-4), "compression was not used"
    assert not torch.allclose(test_inputs.grad, full_grad, rtol=0, atol=1e-3), "compression was not used"
    assert abs(approx_outputs - full_outputs).mean() < 0.01
    absmax = abs(full_grad).max()
    assert abs(test_inputs.grad / absmax - full_grad / absmax).mean() < 0.05


class DummyCustomSequenceManager(RemoteSequenceManager):
    """A sequence manager that compresses inputs/outputs during forward and backward pass."""

    @property
    def rpc_info(self):
        rpc_info = super().rpc_info
        dims = (2048, 1024)
        compressed_input_schema = BatchTensorDescriptor(dims, compression=runtime_pb2.CompressionType.FLOAT16)
        rpc_info["forward_schema"] = (compressed_input_schema,), dict()  # (args, kwargs)
        return rpc_info
    def get_request_metadata(self, protocol: str, *args, **kwargs):
        metadata = super().get_request_metadata(protocol, *args, **kwargs)
        if protocol == "rpc_forward":
            metadata["output_compression"] = (runtime_pb2.CompressionType.FLOAT16,)
        elif protocol == "rpc_backward":
            metadata["output_compression"] = (runtime_pb2.CompressionType.FLOAT16,)
            # FIXME: Initially, we used CompressionType.BLOCKWISE_8BIT for rpc_backward() here.
            # This is currently broken since hivemind==1.1.8 is not compatible with bitsandbytes==0.39.1.
            # Please revert to BLOCKWISE_8BIT once this is fixed: https://github.com/learning-at-home/hivemind/issues/572
        return metadata


@pytest.mark.forked
def test_remote_sequential_prompts(batch_size=2, seq_len=5, pre_seq_len=3):
    config = AutoDistributedConfig.from_pretrained(MODEL_NAME, initial_peers=INITIAL_PEERS)
    remote_sequential = RemoteSequential(config)

    inputs = F.normalize(torch.randn(batch_size, seq_len, config.hidden_size), dim=-1)
    output_proj = F.normalize(torch.randn(batch_size, seq_len + pre_seq_len, config.hidden_size), dim=-1)
    input_prompts = F.normalize(torch.randn(batch_size, pre_seq_len, config.hidden_size, requires_grad=True), dim=-1)
    intermediate_prompts = torch.randn(
        config.num_hidden_layers, batch_size, pre_seq_len, config.hidden_size, requires_grad=True
    )
    input_prompts = input_prompts.detach().requires_grad_(True)
    intermediate_prompts = intermediate_prompts.detach().requires_grad_(True)

    inputs_with_prompts = torch.cat([inputs, input_prompts], dim=1)
    assert inputs_with_prompts.shape == (batch_size, seq_len + pre_seq_len, config.hidden_size)

    outputs = remote_sequential(inputs_with_prompts, prompts=intermediate_prompts)

    (outputs * output_proj).sum().backward()
    assert intermediate_prompts.grad is not None

    input_prompts_ref = input_prompts.clone().detach().requires_grad_(True)
    intermediate_prompts_ref = intermediate_prompts.clone().detach().requires_grad_(True)

    assert input_prompts_ref.grad is None
    assert intermediate_prompts_ref.grad is None

    outputs_ref = torch.cat([inputs, input_prompts_ref], dim=1)
    for block_index in range(config.num_hidden_layers):
        block_prompt = intermediate_prompts_ref[block_index]
        outputs_ref[:, : block_prompt.shape[1]] += block_prompt

        block = load_pretrained_block(MODEL_NAME, block_index=block_index, torch_dtype=torch.float32)
        (outputs_ref,) = block(outputs_ref)

    assert torch.allclose(outputs_ref, outputs, atol=1e-3)

    (outputs_ref * output_proj).sum().backward()
    assert input_prompts_ref.grad is not None
    assert torch.allclose(input_prompts_ref.grad, input_prompts.grad, atol=3e-2)
    assert intermediate_prompts_ref.grad is not None
    assert torch.allclose(intermediate_prompts_ref.grad, intermediate_prompts.grad, atol=1e-2)