Just running `aimg --help` or `aimg --version` was very slow because all of the heavy imports were loaded eagerly at startup.
Before changes, `aimg --help`:
`2.24s user 4.05s system 184% cpu 3.416 total`
After changes:
`0.04s user 0.02s system 8% cpu 0.625 total`
Used `PYTHONPROFILEIMPORTTIME=1 aimg --help` to find the time-consuming imports.
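To illustrate the fix pattern (a sketch only, with hypothetical module and function names, not the project's actual layout): the heavy imports move from module scope into the command body, so building the CLI and rendering `--help` never touches torch or transformers.
```python
import click


@click.group()
@click.version_option("0.0.0")  # `--version` resolves without any heavy imports
def aimg():
    """imaginAIry CLI."""


@aimg.command()
@click.argument("prompt")
def imagine(prompt):
    # Deferred import: torch/transformers are only pulled in when an image
    # is actually generated, not when `--help` is printed.
    from imaginairy.api import generate_image  # hypothetical helper

    generate_image(prompt)


if __name__ == "__main__":
    aimg()
```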
Also switched from `entry_points` to `scripts`, since plain scripts start much faster.
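For reference, the packaging change is roughly this (a sketch with hypothetical paths, not the real `setup.py`): a plain script avoids the `console_scripts` wrapper, which with older pip/setuptools generations imports `pkg_resources` on every run.
```python
from setuptools import find_packages, setup

setup(
    name="imaginairy",
    packages=find_packages(),
    # A plain script launches faster than a console_scripts entry point,
    # whose older generated wrappers import pkg_resources at startup.
    scripts=["imaginairy/bin/aimg"],  # hypothetical script path
    # previously:
    # entry_points={"console_scripts": ["aimg=imaginairy.cli:aimg"]},
)
```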
Made a duplicate `SAMPLER_TYPE_OPTIONS` list that can be loaded without importing the samplers themselves.
Likely a breaking change; not certain.
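Roughly, the split looks like this (a sketch; the names are illustrative): the CLI imports only a plain list of sampler names for its option choices, and the actual sampler classes are imported on demand.
```python
# Lightweight constants the CLI can import cheaply: just the option names,
# no torch or model code.
SAMPLER_TYPE_OPTIONS = [  # illustrative names
    "plms",
    "ddim",
    "k_euler",
    "k_euler_a",
]


def get_sampler(sampler_type):
    # The real sampler implementations drag in torch, so they are only
    # imported once a sampler is actually requested.
    from imaginairy.samplers import SAMPLER_LOOKUP  # hypothetical mapping

    return SAMPLER_LOOKUP[sampler_type]
```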
Loading PNG images for captioning will cause the following error, since PIL loads PNGs with an alpha channel in RGBA mode (four channels instead of the expected three).
```python
File "./miniconda3/lib/python3.8/site-packages/torchvision/transforms/functional_tensor.py", line 940, in normalize
return tensor.sub_(mean).div_(std)
RuntimeError: The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 0
```
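A common way to avoid this (a sketch, not necessarily the fix used here) is to drop the alpha channel before the image reaches the 3-channel normalization transform:
```python
from PIL import Image


def load_rgb(image_path):
    """Open an image and force it to 3-channel RGB so a normalize()
    with 3-element mean/std never sees a 4-channel RGBA tensor."""
    img = Image.open(image_path)
    if img.mode != "RGB":
        img = img.convert("RGB")
    return img
```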
If a model has a remote filename with characters that might not be supported on the local cache filesystem, replace them when translating the URL into the cached file path.
Closes #202.
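Conceptually the sanitization is just a character substitution when the URL is turned into a cache file name (a sketch; the actual function name and character set may differ):
```python
import re


def sanitize_cache_filename(remote_filename):
    # Replace characters that are invalid on common filesystems
    # (Windows forbids < > : " / \ | ? * in file names) with underscores.
    return re.sub(r'[<>:"/\\|?*]', "_", remote_filename)
```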
If one uses `hf_hub_download` referencing only specific commits, the `refs` folder will not be created, even though the data is cached via `snapshots` and `blobs`. Subsequent calls to `try_to_load_from_cache` will then return `None` even though the desired data is in the cache.
Example:
```python
from huggingface_hub import hf_hub_download, try_to_load_from_cache

# download a file pinned to a specific commit (no branch ref is resolved)
hf_hub_download(repo_id=repo, revision=commit_hash, filename=filepath, token=token)
# returns None because no `refs` entry was written for the commit
try_to_load_from_cache(repo_id=repo, revision=commit_hash, filename=filepath)
```
https://github.com/huggingface/huggingface_hub/pull/1306
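Until this is resolved upstream, one possible workaround (a sketch, not the project's actual code) is to fall back to the documented cache layout, `models--{org}--{name}/snapshots/{commit}/`, when `try_to_load_from_cache` comes back empty:
```python
import os

from huggingface_hub import try_to_load_from_cache
from huggingface_hub.constants import HUGGINGFACE_HUB_CACHE


def cached_file_for_commit(repo_id, commit_hash, filename):
    cached = try_to_load_from_cache(
        repo_id=repo_id, revision=commit_hash, filename=filename
    )
    if isinstance(cached, str):
        return cached
    # No `refs` entry exists for a bare commit hash, so check the
    # snapshots directory for that commit directly.
    repo_dir = "models--" + repo_id.replace("/", "--")
    candidate = os.path.join(
        HUGGINGFACE_HUB_CACHE, repo_dir, "snapshots", commit_hash, filename
    )
    return candidate if os.path.exists(candidate) else None
```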