diff --git a/README.md b/README.md
index 1f0053d..4f622a4 100644
--- a/README.md
+++ b/README.md
@@ -123,10 +123,9 @@ You can use `{}` to randomly pull values from lists. A list of values separated
 
 `imagine "a {lime|blue|silver|aqua} colored dog" -r 2 --seed 0` will generate two images, each with a color randomly pulled from the list.
 
-
-
-
-
+
+
+
 `imagine "a {_color_} dog" -r 4 --seed 0` will generate four different-colored dogs. The colors will be pulled from an included phraselist of colors.
 
@@ -362,10 +361,13 @@ would be uncorrelated to the rest of the surrounding image. It created terrible
 - ✅ codeformer - https://github.com/sczhou/CodeFormer
 - ✅ image describe feature
   - ✅ https://github.com/salesforce/BLIP
-  - https://github.com/rmokady/CLIP_prefix_caption
-  - https://github.com/pharmapsychotic/clip-interrogator (blip + clip)
+  - 🚫 CLIP brute-force prompt reconstruction
+    - The accuracy of this approach is too low for me to include it in imaginAIry
+    - https://github.com/rmokady/CLIP_prefix_caption
+    - https://github.com/pharmapsychotic/clip-interrogator (blip + clip)
   - https://github.com/KaiyangZhou/CoOp
-- CPU support
+- 🚫 CPU support. While the code does actually work on some CPUs, the generation takes so long that I don't think it's
+  worth the effort to support this feature
 - ✅ img2img for plms
 - img2img for kdiff functions
 - Other
diff --git a/setup.py b/setup.py
index 6bfc0ac..37a6f9c 100644
--- a/setup.py
+++ b/setup.py
@@ -7,7 +7,7 @@ setup(
     name="imaginAIry",
     author="Bryce Drennan",
     # author_email="b r y p y d o t io",
-    version="2.3.1",
+    version="2.4.0",
     description="AI imagined images. Pythonic generation of stable diffusion images.",
     long_description=readme,
     long_description_content_type="text/markdown",
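For context on the README hunk above: the `{a|b|c}` prompt syntax picks one alternative at random for each generation. A minimal sketch of that expansion logic in plain Python (a hypothetical `expand_prompt` helper, not imaginAIry's actual implementation):

```python
import random
import re


def expand_prompt(prompt, seed=None):
    """Replace each {a|b|c} group with one randomly chosen alternative."""
    rng = random.Random(seed)
    return re.sub(
        r"\{([^{}]+)\}",
        lambda m: rng.choice(m.group(1).split("|")),
        prompt,
    )


# One of the four colors is substituted in; a fixed seed makes the pick repeatable.
print(expand_prompt("a {lime|blue|silver|aqua} colored dog", seed=0))
```

The `{_color_}` form in the README works the same way conceptually, except the alternatives come from a bundled phraselist rather than being spelled out inline.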