* feature: updates the vendored refiners library
Includes a small bugfix that will soon be replaced by a better fix from upstream refiners.
Co-authored-by: Bryce <github20210803@accounts.brycedrennan.com>
- allow generation at any size
- output "bounce" animations
- choose output format: mp4, webp, or gif
- fix random seed handling
- reorganize some code
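The animation code itself isn't shown in this changelog; below is a minimal sketch, assuming Pillow, of how a "bounce" animation (frames played forward and then in reverse) could be assembled and written as gif or webp. `save_bounce_animation` and the sample frames are hypothetical names, and mp4 output would go through ffmpeg (e.g. via imageio-ffmpeg) rather than this code path.

```python
# Minimal sketch (not the project's actual implementation) of building a
# "bounce" animation and writing it with Pillow. Names are illustrative.
from PIL import Image


def save_bounce_animation(frames: list[Image.Image], out_path: str, fps: int = 12) -> None:
    """Append the reversed frames so the clip plays forward, then backward."""
    bounce_frames = frames + frames[-2:0:-1]  # avoid duplicating the turnaround frames
    ext = out_path.rsplit(".", 1)[-1].lower()
    if ext in ("gif", "webp"):
        bounce_frames[0].save(
            out_path,
            save_all=True,
            append_images=bounce_frames[1:],
            duration=int(1000 / fps),  # per-frame duration in milliseconds
            loop=0,  # loop forever
        )
    else:
        raise ValueError(f"unsupported format: {ext}")  # mp4 would need ffmpeg


# usage with dummy frames
frames = [Image.new("RGB", (256, 256), (i, 0, 0)) for i in range(0, 255, 16)]
save_bounce_animation(frames, "bounce.webp")
```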
* docs: adds a docs tool (Material for MkDocs) along with more fleshed-out docstrings.
This includes the ability to serve a local docs website.
fixes #424
- perf: improve memory usage when loading SD15.
- feature: clean up CLI output more
- feature: cuda memory tracking context manager (see the sketch after this list)
- feature: use safetensors fp16 for sd15
- record timing and memory usage of various steps
- re-use logging context for composition images
- load sdxl weights in a more VRAM efficient way
- switch to diffusers weights for default weights for sd15
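The memory tracking context manager itself isn't shown above; here is a minimal sketch of the idea using PyTorch's CUDA memory statistics. The name `track_cuda_memory` and the log format are assumptions, not the project's actual API.

```python
# Sketch of a CUDA memory tracking context manager, assuming PyTorch.
import logging
from contextlib import contextmanager

import torch

logger = logging.getLogger(__name__)


@contextmanager
def track_cuda_memory(label: str):
    """Log net and peak CUDA memory usage for the wrapped block."""
    if not torch.cuda.is_available():
        yield
        return
    torch.cuda.reset_peak_memory_stats()
    start = torch.cuda.memory_allocated()
    try:
        yield
    finally:
        end = torch.cuda.memory_allocated()
        peak = torch.cuda.max_memory_allocated()
        logger.info(
            "%s: net %.1f MB, peak %.1f MB",
            label,
            (end - start) / 1024**2,
            peak / 1024**2,
        )


# usage
with track_cuda_memory("load sd15 weights"):
    pass  # e.g. load model weights here
```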
- adds support for [SDXL](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
- adds sliced encoding/decoding to the refiners SDXL pipeline (see the sketch after this list)
- doesn't support inpainting or controlnets
- monkeypatches self_attention_guidance to use sliced attention
- adds a bunch of model weight translation utilities and weightmaps
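As a rough illustration of sliced decoding (not the refiners pipeline's actual code), the sketch below splits the latent tensor into chunks along the height axis and decodes them one at a time so peak VRAM stays low. `sliced_decode` and `decode_fn` are hypothetical names, and a real implementation would overlap and blend tiles to avoid seams.

```python
# Illustrative sketch of sliced VAE decoding: decode the latents in chunks
# along the height axis and stitch the results back together.
import torch


def sliced_decode(decode_fn, latents: torch.Tensor, chunk_size: int = 64) -> torch.Tensor:
    """decode_fn maps a latent tile (B, C, h, w) to an image tile (B, 3, 8*h, 8*w)."""
    decoded_chunks = []
    for start in range(0, latents.shape[2], chunk_size):
        chunk = latents[:, :, start : start + chunk_size, :]
        with torch.no_grad():
            decoded_chunks.append(decode_fn(chunk))
    return torch.cat(decoded_chunks, dim=2)  # stitch the image rows back together
```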
- add [opendalle 1.1](https://huggingface.co/dataautogpt3/OpenDalleV1.1)
- change default model to opendalle
- fix: better handling of special characters in path inputs on the command line
**Todo**
- add tests
- use face enhancement in a smarter way that doesn't blur high-res images
- use a different upscale model for composition images
**Upscaling**
RealESRGAN is great, but it blurs parts of images it doesn't understand.
4xUltrasharp is a finetune of RealESRGAN that isn't as good overall but doesn't have this blurry-patch problem, which makes it more suitable for the composition/upscale process. We still use RealESRGAN for any last-step upscale since it does look better.
A state dict translator had to be written to use the 4xUltrasharp model (see the sketch below).
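As a rough sketch of that translation step (the key map and paths below are made up for illustration, not the real 4xUltrasharp weightmap): load the checkpoint, rename its keys to match the target model's parameter names, and hand the result to the model.

```python
# Hypothetical state-dict translation: rename checkpoint keys via a mapping.
import torch


def translate_state_dict(state_dict: dict, key_map: dict[str, str]) -> dict:
    """Rename checkpoint keys so they match the target model's parameter names."""
    translated = {}
    for old_key, tensor in state_dict.items():
        new_key = key_map.get(old_key)
        if new_key is None:
            continue  # drop keys the target model doesn't use
        translated[new_key] = tensor
    return translated


# usage (key names and path are illustrative only)
key_map = {"model.0.weight": "conv_first.weight", "model.0.bias": "conv_first.bias"}
raw = torch.load("4x-ultrasharp.pth", map_location="cpu")
upscaler_weights = translate_state_dict(raw, key_map)
```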
**Face Enhancement**
We no longer enhance faces that are larger than 512 pixels. They should already have enough detail, and the face enhancer doesn't produce faces at a high enough resolution to look good at that size.
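A minimal sketch of that size gate, with hypothetical helper names:

```python
# Skip enhancement for faces already larger than 512 px on their longest side.
MAX_FACE_SIZE = 512


def should_enhance_face(face_width: int, face_height: int) -> bool:
    """Only enhance faces small enough for the enhancer's output resolution to help."""
    return max(face_width, face_height) <= MAX_FACE_SIZE
```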