Compare commits

...

205 Commits

Author SHA1 Message Date
Bryce
1c79db1d78 version: 15.0.0 2024-09-21 19:35:53 -07:00
Bryce
a512ed7032 build: update requirements 2024-09-21 19:35:53 -07:00
Bryce
26c0a5608b docs: disable doc check for now 2024-09-21 19:35:53 -07:00
Bryce
07d43219d1 feature: flux model
works on my nvidia 4090 and probably not many other places. rush job. will not be providing support but will accept pull requests
2024-09-21 19:35:53 -07:00
Bryce
e72b54ec63 docs: fix ref. add build job 2024-09-21 19:35:53 -07:00
Bryce
170d6162e2 fix: update sd15 weight links
Fixes #488.

See also https://huggingface.co/posts/dn6/357701279407928
2024-09-21 15:22:37 -07:00
Bryce
1f0ae5ffa6 version: 14.3.0 2024-04-18 01:50:33 -07:00
Bryce
12e66a727f ci: faster tests 2024-04-18 01:26:13 -07:00
Bryce
1faea372f9 fix: cleanup logging - remove unnecessary version checks 2024-04-18 01:26:13 -07:00
jaydrennan
cc79cac5fc refactor: ruff formatting 2024-04-18 01:26:13 -07:00
jaydrennan
964dd4ead7 feature: integrates spandrel for upscaling 2024-04-18 01:26:13 -07:00
Bryce
efbab3a141 fix: be forgiving of - vs _ in video decoder model name
Addresses https://github.com/brycedrennan/imaginAIry/issues/485
2024-04-17 20:00:36 -07:00
Bryce
3a9a3974ce fix: allow referencing local paths for sdxl model weights
Addresses https://github.com/brycedrennan/imaginAIry/issues/484
2024-04-17 20:00:36 -07:00
Bryce Drennan
3c1c695f76
feature: cloth segmentation (#482) 2024-04-04 23:27:18 -07:00
Bryce Drennan
df86aa6668
feature: densepose controlnet (#481) 2024-04-04 22:02:25 -07:00
Bryce
ce37e60b11 version: 14.2.0 2024-03-17 01:26:10 -07:00
Bryce Drennan
49f2c25b6b
feature: IP-Adapter (#477)
todo
- allow specification of ip adapter weights/arch


---------

Co-authored-by: jaydrennan <jsdman1313@gmail.com>
2024-03-17 00:52:14 -07:00
Bryce
9c48b749d8 feature: script for running imaginairy in the modal.com cloud 2024-03-16 21:17:32 -07:00
Bryce
76b6fa8b65 tests: fix them 2024-03-15 12:10:08 -07:00
Bryce
9cdacd454f style: use latest ruff 2024-03-15 11:32:24 -07:00
Bryce
a8acb451c5 ci: use uv
waiting for this issue to be resolved before using it for pip-compile

https://github.com/astral-sh/uv/issues/1624

and it didn't properly install the command-line tools `aimg` and `imagine`, so we're not using it for editable installs on GitHub either
2024-03-05 21:49:18 -08:00
Bryce
e6a1c988c5 fix: if weights are float32 but float16 was specified, still use float16 2024-01-20 12:35:58 -08:00
Bryce
cf8a44b317 feature: update refiners
better handles img2img (partial diffusion runs)
2024-01-20 12:35:58 -08:00
jaydrennan
1bf53e47cf
feature: updates refiners vendored library (#458)
* feature: updates refiners vendored library

has a small bugfix that will soon be replaced by a better fix from upstream refiners

Co-authored-by: Bryce <github20210803@accounts.brycedrennan.com>
2024-01-19 08:45:23 -08:00
Bryce
fbb16f6c62 version: 14.1.1 2024-01-18 06:49:17 -08:00
Bryce
279dc28b1d build: triton only has wheels for linux 2024-01-18 06:47:14 -08:00
Bryce
3906072191 ci: test installation on windows, mac, and conda 2024-01-18 06:47:14 -08:00
Bryce
60baed839c build: remove some dependency version limits 2024-01-17 21:31:05 -08:00
Bryce
4639dfb041 version: 14.1.0 2024-01-17 21:09:14 -08:00
Bryce Drennan
1de5d28554
test: run some tests on python 3.11 as well (#454)
torch doesn't support 3.12 yet
2024-01-14 18:49:12 -08:00
Bryce Drennan
601a112dc3
refactor: move download related functions to separate module (#453)
+ renames and typehints
2024-01-14 16:50:17 -08:00
Bryce Drennan
502ffbdc63
feature: sdxl inpaint support (#450) 2024-01-13 18:13:48 -08:00
Bryce Drennan
700cb457b9
feature: support loading sdxl compvis weights (#449) 2024-01-13 13:43:15 -08:00
Bryce Drennan
907e80d1f2
feature: video interpolation (#448)
- uses rife algorithm to interpolate frames
2024-01-08 09:00:22 -08:00
Bryce Drennan
bb2dd45cf2
feature: videogen improvements (#447)
- allow generation at any size
- output "bounce" animations
- choose output format: mp4, webp, or gif
- fix random seed handling
- organize some code better
2024-01-07 18:11:20 -08:00
Bryce Drennan
c5199ed7cc
docs: minor fixes (#446) 2024-01-07 17:29:29 -08:00
Bryce Drennan
66b17cc315
docs: add github action to push docs (#444) 2024-01-06 18:29:29 -08:00
Bryce
807c976da3 build: remove imageio dependency 2024-01-06 17:23:27 -08:00
Bryce
d2609cb5cd fix: use smaller composition size 2024-01-06 17:23:27 -08:00
Bryce
5bbb09f69e build: vendorize facexlib
had too many unused sub-dependencies

also monkeypatch the download mechanism to use our standard download function
2024-01-06 17:23:27 -08:00
Bryce
4521d518ac version: 14.0.4 2024-01-05 06:35:47 -08:00
Bryce Drennan
d3106fc9e3
fix: videogen bug (#443) 2024-01-05 06:34:17 -08:00
jaydrennan
89bc1a9f1c
docs: adds docs tool, material for mkdocs, along with more fleshed ou… (#428)
* docs: adds docs tool, material for mkdocs, along with more fleshed out docstrings.

this includes ability to serve up a local docs website.


---------

Co-authored-by: Bryce <github20210803@accounts.brycedrennan.com>
2024-01-04 22:36:30 -07:00
Bryce Drennan
0271bffa38
build: remove fairscale dependency (#441) 2024-01-03 21:06:14 -08:00
Bryce
12e4855792 version: 14.0.3 2024-01-03 20:08:33 -08:00
Bryce
7ea4cba5d3 fix: add missing dependency. add package smoketest 2024-01-03 20:07:26 -08:00
Bryce
f866aaa44d version: 14.0.2 2024-01-03 19:37:44 -08:00
Bryce
e00c7b9eb7 fix: add back missing init file 2024-01-03 19:36:03 -08:00
Bryce
1598469179 version: 14.0.1 2024-01-03 09:02:26 -08:00
Bryce
d148bc1537 fix: progress latent collection bug 2024-01-03 09:01:30 -08:00
Bryce
fdb48399af docs: separate changelog. other small cleanup 2024-01-03 09:01:30 -08:00
Bryce
ed40a12c01 version: 14.0.0 2024-01-02 22:51:41 -08:00
Bryce
57dc27df8c build: tag imaginairy as typed 2024-01-02 22:02:31 -08:00
Bryce
55e27160f5 build: vendorize refiners
so we can still work in conda envs
2024-01-02 22:02:31 -08:00
Bryce
f84406f12c fix: handle unexpected keys in weights better 2024-01-02 20:51:05 -08:00
Bryce
5b3b04b877 build: remove pytorch lightning dependency 2024-01-02 20:51:05 -08:00
Bryce
9f33fa0664 version: 14.0.0b9 2024-01-01 20:17:39 -08:00
Bryce
35a5ccbcba fix: include weightmaps 2024-01-01 20:17:14 -08:00
Bryce
e90bf1b47d version: 14.0.0b8 2024-01-01 20:01:53 -08:00
Bryce
7100d3f9ea perf: make upscaler use fp16 for better efficiency 2024-01-01 19:59:31 -08:00
Bryce
4fcfc363af fix: always show total last 2024-01-01 19:59:31 -08:00
Bryce
f50a1f5b0c fix: interrupted generations don't prevent more generations
fixes #424

- perf: improve memory usage when loading SD15.
- feature: clean up CLI output more
- feature: cuda memory tracking context manager
- feature: use safetensors fp16 for sd15
2024-01-01 19:59:31 -08:00
Bryce
9e3403df89 feature: clean up terminal output
- recording timing and memory usage of various steps
- re-use logging context for composition images
- load sdxl weights in a more VRAM efficient way
- switch to diffusers weights for default weights for sd15
2024-01-01 15:15:31 -08:00
Bryce
0d78b8271f version: 14.0.0b7 2024-01-01 15:15:31 -08:00
Bryce Drennan
77c4b85037
perf: improve memory usage (#433)
add warning for corrupt weights files
2023-12-29 09:04:33 -08:00
Bryce
26d1ff9bc4 version: 14.0.0b6 2023-12-27 22:23:21 -08:00
Bryce Drennan
42a045e8e6
feature: support sdxl (#431)
- adds support for [SDXL](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
  - adds sliced encoding/decoding to refiners sdxl pipeline
  - doesn't support inpainting, controlnets
- monkeypatches self_attention_guidance to use sliced attention
- adds a bunch of model weight translation utilities and weightmaps
- add [opendalle 1.1](https://huggingface.co/dataautogpt3/OpenDalleV1.1)
- change default model to opendalle
- fix: better handle special characters in path inputs on command line
**todo**
- add tests
2023-12-27 21:52:37 -08:00
jaydrennan
fb7cff3684
Merge pull request #430 from brycedrennan/ruff_formatter
Ruff formatter
2023-12-27 18:51:15 -07:00
jaydrennan
3322777f5e refactor: formatting changes ruff formatter
ruff formatter has slight differences in formatting compared to black
2023-12-27 17:08:01 -08:00
jaydrennan
7eef3bf628 feature: replaces black formatter with ruff formatter 2023-12-27 17:08:01 -08:00
Bryce Drennan
a2c38b3ec0
feature: support loading diffusers folder/based models from huggingface (#427) 2023-12-21 14:24:35 -08:00
Bryce
50e796a3b7 refactor: move code around 2023-12-21 05:48:02 -08:00
Bryce
32b5175e0e feature: better upscaling
- use face enhancement in a smarter way that doesn't blur high-res images
- use a different upscale model for composition images

**Upscaling**
RealESRGAN is great but it blurs parts of images it doesn't understand

4xUltrasharp is a finetune of RealESRGAN that isn't as good but doesn't have this blurry patch problem. This makes it more suitable to use as part of the composition/upscale process. We still use RealESRGAN for any last-step upscales since it does look better.

had to write a state dict translator to use the ultrasharp model

**Face Enhancement**

We no longer enhance faces that are larger than 512 pixels. They should already have enough details and the face enhancer doesn't produce faces at high enough resolution to look good at that size.
2023-12-21 05:48:02 -08:00
Bryce
6ebd12abb1 refactor: move code to more intuitive places 2023-12-21 05:48:02 -08:00
Bryce
8cfb46d6de fix: bug in sliced encoder 2023-12-21 05:48:02 -08:00
Bryce
372453e645 refactor: remove training code 2023-12-21 05:48:02 -08:00
Bryce Drennan
616f686ed2
small changes (#425)
* docs: update todo

* refactor: small cleanup of tiling code
2023-12-19 12:39:34 -08:00
Bryce
d834e8b5b3 test: run non-gpu tests on github 2023-12-18 21:24:59 -08:00
jaydrennan
df00109074 refactor: space formatting 2023-12-18 21:24:59 -08:00
jaydrennan
d39486af54 fix: updates test marking to use nodeid instead of name 2023-12-18 21:24:59 -08:00
jaydrennan
0c01cd690f
fix: sets correct default value for composition strength. (#422)
also corrects a positional argument error by requiring _imagine_cmd to take keyword arguments.
2023-12-18 17:31:57 -08:00
Bryce
0c03612d44 feature: large images now stay well-composed thanks to tile controlnet 2023-12-18 15:33:25 -08:00
jaydrennan
2372a71e6c fix: adds tile/detail controlnet back in. 2023-12-18 15:33:25 -08:00
Bryce Drennan
f88b5c1b2b
fix: word images still work without specified size (#421) 2023-12-18 15:09:23 -08:00
Bryce
7880ee1389 feature: update midas (depth maps) 2023-12-18 13:01:56 -08:00
Bryce
bf14ee6ee6 feature: add christmas-scene phrase list
Also add script that uses chatgpt to generate phrase-lists
2023-12-18 13:01:56 -08:00
Bryce
c6ac5f553a refactor: separate controlnet image preprocessing 2023-12-18 13:01:56 -08:00
Bryce
9a0e0cd1a7 feature: better depth maps 2023-12-18 13:01:56 -08:00
jaydrennan
2eee741b20
Merge pull request #418 from brycedrennan/fix_test_set_spu_full
fix: configures test_set_gpu_full to run on a m1 mac.
2023-12-17 23:50:46 -07:00
jaydrennan
c99a169986 fix: configures test_set_gpu_full to run on a m1 mac. 2023-12-17 22:35:12 -08:00
Bryce Drennan
2144f26fa7
feature: add ability to dynamically make word images (#417) 2023-12-16 22:08:19 -08:00
jaydrennan
3bd3dfdeaf
feature: adds --composition-strength parameter to cli (#416) 2023-12-16 14:40:06 -08:00
jaydrennan
41a9d7007b
test: adds tests for stablestudio (#415) 2023-12-16 12:00:03 -08:00
jaydrennan
e1e6f8037c
refactor: removes unused code and configurations (#405)
Co-authored-by: jaydrennan
2023-12-15 15:27:00 -08:00
Bryce
6d39d791b1 refactor: move safety to utils 2023-12-15 14:32:01 -08:00
Bryce
168a843f29 refactor: move colorize to api 2023-12-15 14:32:01 -08:00
Bryce
96f4268d44 refactor: move video_sample to api 2023-12-15 14:32:01 -08:00
Bryce
e72e8992ab refactor: create api module 2023-12-15 14:32:01 -08:00
Bryce
ad561e8833 refactor: move model_manager to utils 2023-12-15 14:32:01 -08:00
Bryce
d478771cc0 refactor: move a bunch of stuff to utils 2023-12-15 14:32:01 -08:00
Bryce
987af23abe refactor: move train.py 2023-12-15 14:32:01 -08:00
Bryce
0c456cd52a refactor: remove lr_scheduler.py 2023-12-15 14:32:01 -08:00
Bryce
01e32ff3f6 refactor: move bin files 2023-12-15 14:32:01 -08:00
Bryce
316114e660 docs: add docstrings
Wrote an openai script and custom prompt to generate them.
2023-12-15 14:32:01 -08:00
jaydrennan
e7b6fc40fa fix: adds default line ending for csv writing.
the csv library defaults to using CRLF line endings if not specified.
2023-12-14 21:12:25 -08:00
jaydrennan
3f3e080d39 feature: adds ability to use qrcode
feature: adds controlnet qrcode image generation.
feature: adds control net for qrcode image generation.
2023-12-14 21:12:25 -08:00
Bryce
62de446a92 ci: add mypy github action 2023-12-12 20:54:39 -08:00
Bryce
012cc648d3 style: fix all the mypy typing issues
...or ignore them.
2023-12-12 20:54:39 -08:00
Bryce
5a636e45c5 feature: skip composition at sizes slightly larger than model is expecting 2023-12-12 20:54:39 -08:00
Bryce
203747b14f refactor: simplify model_weights/architecture 2023-12-12 20:54:39 -08:00
Bryce
37ecd1e5e0 fix: videogen. track gpu tests 2023-12-12 20:54:39 -08:00
Bryce
eae4f20ae2 ci: add type checker
fix some typehint issues
2023-12-12 20:54:39 -08:00
Bryce
e898e3a799 fix: several cli commands, edit demo, negative prompt
- fix colorize cmd. add test
- fix describe cmd. add test
- fix model-list cmd. add test
- fix stable studio
- hide stack trace for ValueErrors in cli
- set controlnet scale
- fix negative prompt to allow emptystring instead of replacing it with default
- adjust edit-demo parameters
- arg scheduler that works at click level (but disable it). works but not ideal experience.
2023-12-12 20:54:39 -08:00
Bryce
c299cfffd9 fix: cache the controlnet models 2023-12-12 20:54:39 -08:00
Bryce
9b95e8b0b6 perf: improve cli startup time
- do not provide automatically imported api functions and objects in `imaginairy` root module
- horrible hack to overcome horrible design choices by easy_install/setuptools

The hack modifies the installed script to remove the __import__ pkg_resources line

If we don't do this then the scripts will be slow to start up because of
pkg_resources.require() which is called by setuptools to ensure the
"correct" version of the package is installed.

before modification example:
```
__requires__ = 'imaginAIry==14.0.0b5'
__import__('pkg_resources').require('imaginAIry==14.0.0b5')
__file__ = '/home/user/projects/imaginairy/imaginairy/bin/aimg'
with open(__file__) as f:
    exec(compile(f.read(), __file__, 'exec'))
```
2023-12-12 20:54:39 -08:00
Bryce
2bd6cb264b feature: large refactor
- add type hints
- size parameter
- ControlNetInput => ControlInput
- simplify imagineresult
2023-12-12 20:54:39 -08:00
Bryce
db85f0898a feature: remove training feature 2023-12-06 22:04:06 -08:00
jaydrennan
ef0f44646e feature: adds --control-strength as parameter for cli 2023-12-05 22:14:19 -08:00
Bryce
24f4af3482 feature: better torch installation experience 2023-12-05 21:46:55 -08:00
Bryce
71d4992dca feature: added --size parameter to allow using named sizes 2023-12-05 21:46:55 -08:00
Bryce
14ecf93c6a feature: add sliced self-attention for future use with video generation 2023-12-05 21:46:55 -08:00
Bryce
1136fdb939 version: 14.0.0b5 2023-12-03 15:48:36 -08:00
Bryce
6fcf0da331 test: try local runner 2023-12-03 09:13:01 -08:00
Bryce
0fe3733933 fix: memory management issue
the dtype being used as a cache key wasn't consistent. this caused the model to be loaded twice
2023-12-03 09:13:01 -08:00
Bryce
82c30024c9 feature: DDIM now default sampler
better output quality
2023-12-03 09:13:01 -08:00
Bryce
ba57393022 feature: patch refiners ScaledDotProductAttention for sliced attention 2023-12-03 09:13:01 -08:00
Bryce
1b15d6dcd4 feature: sliced image encoding for SD1Autoencoder 2023-12-03 09:13:01 -08:00
Bryce
b61d06651c tests: fix tests
- disable details mode. needs more work done to support
2023-12-03 09:13:01 -08:00
jaydrennan
cff17ef6f4
feature: adds text to video generation flag (#404)
Co-authored-by: jaydrennan
2023-11-27 18:00:56 -08:00
Bryce
8fec56970e version: 14.0.0b4 2023-11-25 16:25:14 -08:00
Bryce
8267482aad fix: remove padding approach
- padding didn't make animations better
- some changes to better support CPU generation (not yet working)
- better log output coloring
- better log messages when cuda not found
2023-11-25 16:18:44 -08:00
Bryce
6e1c44dae7 fix: add missing data files
fixes #402
2023-11-25 15:44:44 -08:00
Bryce
b9e245eb7d fix: remove cuda memory tracking 2023-11-24 12:39:49 -08:00
Bryce
3b2265f82d feature: add video progress bar 2023-11-24 12:33:36 -08:00
Bryce
b5a0e65f35 fix: edit mode and some controlnet tests 2023-11-24 09:10:12 -08:00
Bryce
b7fad562d0 feature: improved logging
- clean up some error messages
- add color
- indent tqdm bar
2023-11-24 08:27:36 -08:00
Bryce
07b097e001 version: 14.0.0b3 2023-11-23 12:25:35 -08:00
Bryce
a126e1c1d7 feature: better error message when file not found 2023-11-23 12:24:48 -08:00
Bryce
aa91d0a9c9 feature: autoresize and crop videos.
This means you can just stick any image into the video generator without worrying about the size.

- better generated video filenames
- output h264 video as well
2023-11-23 10:16:12 -08:00
Bryce
f6c9927d0c version: 14.0.0b2 2023-11-22 20:44:30 -08:00
Bryce
c24ed1f33d feature: videogen improvements
- better filenames
- allow urls as image inputs
- better memory efficiency
- add timing information
2023-11-22 20:43:32 -08:00
Bryce
ab75c49b15 docs: smaller animations 2023-11-22 18:33:28 -08:00
Bryce
ed23c7e1ca version: 14.0.0b1 2023-11-22 18:26:50 -08:00
Bryce
1da7043081 docs: update documentation 2023-11-22 18:26:50 -08:00
Bryce
e8fe8d7d6c feature: stable diffusion video (SVD) 2023-11-22 17:20:08 -08:00
jaydrennan
80ff006604 fix: updates weights_url's for controlnet 2023-11-22 17:18:22 -08:00
jaydrennan
e91a041a78 refactor: removes unused model configs 2023-11-22 17:14:44 -08:00
jaydrennan
76df1210f4 refactor: removes outdated info and spelling error 2023-11-22 17:14:44 -08:00
jaydrennan
d6d2f4b9dd refactor: updates README for 14.0.0
adds notes under currently broken features, removes deprecated feature descriptions, and adds the 14.0.0 changelog
2023-11-22 17:14:44 -08:00
Bryce
f97f6a3b4b feature: use refiners library for generation
BREAKING CHANGE

  - stable diffusion 1.5 + inpainting working
  - self-attention guidance working. improves image generation quality
  - tile-mode working
  - inpainting self-attention guidance working

disable/broken features:
  - sd 1.4, 2.0, 2.1
  - most of the samplers
  - pix2pix edit
  - most of the controlnets
  - memory management
  - python 3.8 support

wip
2023-11-22 13:22:00 -08:00
Bryce
6cd519cdb2 refactor: move code to avoid conflicts with "http" namespace 2023-10-10 23:03:01 -07:00
Bryce
703fb6e331 ci: faster pip install in github actions 2023-09-29 23:01:50 -07:00
Bryce
2273c9144d ci: smarter model caching in github actions 2023-09-29 23:01:50 -07:00
Bryce
db4f040536 tests: split test suite for faster overall runtime 2023-09-29 23:01:50 -07:00
Bryce
558d3388e5 style: speed up linting and autoformatting. fix lints 2023-09-29 23:01:50 -07:00
Bryce
460add16b8 version: 13.2.1 2023-09-29 01:14:14 -07:00
Bryce
8243ed616d fix: pydantic models for http server working now. Fixes #380 2023-09-29 00:40:46 -07:00
Bryce
ba51364a73 version: 13.2.0 2023-09-16 23:20:35 -07:00
Bryce
7c2004bfcc feature/fix: migrate to pydantic 2.3
- test: add schema tests/fuzzer and fixes
- fix default prompt. add tests
- fix outpaint and controlnet defaults
- fix init image strength defaults
2023-09-16 23:09:18 -07:00
Bryce
8e956f5360 feature: add some helper functions 2023-09-16 23:09:18 -07:00
Bryce
f243006236 fix: allow tile_mode to be set to True or False for backward compatibility 2023-09-09 17:37:43 -07:00
Bryce
3b17d8b3ee version: 13.1.0 2023-09-09 15:50:48 -07:00
Bryce
477d161c91 build: better communicate lack of support for Python 3.11 2023-09-09 15:25:57 -07:00
Bryce
1354cb9ed1 build: add minimum package requirements
Should speed up dependency resolution.
2023-09-09 14:26:23 -07:00
Bryce
360546b779 fix: limit pydantic to <2.0 until we fix compatibility issues
also limit scipy to <1.11 doesn't support python 3.8
2023-09-09 14:26:23 -07:00
Bryce
48c51e34e7 docs: add discord link 2023-09-09 11:01:39 -07:00
Sam
69d5b78cba DOCS: Warn that Python 3.11 is not supported 2023-06-02 23:34:38 -07:00
Bryce
8bd652ebf4 docs: add pydantic model feature 2023-05-31 20:02:09 -07:00
Bryce
82d74c6b49 feature: switch to pydantic models
- allow prompt re-use by deferring random seed
2023-05-31 20:02:09 -07:00
Bryce
3fb0dcd891 fix: hide log warning 2023-05-31 20:02:09 -07:00
Bryce
c5c90df337 feature: fix debug level logging 2023-05-31 20:02:09 -07:00
Bryce
9e7a1db2c8 version: 13.0.1 2023-05-26 22:30:26 -07:00
Bryce
8ffb0fac0e fix: add routes to match stablestudio routes 2023-05-26 22:28:43 -07:00
Bryce
edafcc5529 feature: show full stack trace on server error 2023-05-26 22:28:43 -07:00
Bryce
37d2b21a22 build: require python < 3.11 2023-05-26 22:28:43 -07:00
Bryce
671aa86ad7 version: 13.0.0 2023-05-22 02:50:59 -07:00
Bryce
4991b22bc8 version: 13.0b0 2023-05-22 02:30:58 -07:00
Bryce
7b032c8e9a feature: StableStudio web interface
run `aimg server` and visit http://127.0.0.1:8000/
2023-05-22 02:24:05 -07:00
Bryce
8e28a2ed02 feature: API support for StableStudio 2023-05-22 02:24:05 -07:00
Bryce
e53459a50a build: check for torch version at runtime (fixes #329) 2023-05-20 17:29:06 -07:00
Bryce
758d574f8c build: specify proper Pillow minimum version (fixes #325) 2023-05-20 17:29:06 -07:00
Bryce
2a26495653 fix: package will not try to install xformers on aarch64 machines.
While this will allow the dockerfile to build on MacOS M1, [torch will not be able to use the M1 when generating images.](https://github.com/pytorch/pytorch/issues/81224#issuecomment-1499741152)
2023-05-20 17:29:06 -07:00
Bryce
14739bc90b feature: api server (alpha)
`aimg run-api-server`

Proof of concept for now
2023-05-20 17:29:06 -07:00
Bryce
38ac0b7f54 tests: fix tests 2023-05-20 13:09:00 -07:00
Bryce
39dffa9166 style: lintfix 2023-05-20 13:09:00 -07:00
Bryce
dc8f8d5a3d feature: add colorization controlnet. improve aimg colorize command 2023-05-20 13:09:00 -07:00
Bryce
7e62297f73 feature: add simple script to print structure of weight files
useful for debugging
2023-05-20 13:09:00 -07:00
Bryce
df25936d6f feature: automatic use of inpainting
feature disabled since controlnet inpainting doesn't work great. Was disabled by setting `inpaint_method="finetune",`
2023-05-20 13:09:00 -07:00
Bryce
d32e1060cd feature: multi-controlnet support at the command line
add controlnet option for edit demo
2023-05-20 13:09:00 -07:00
Bryce
bcaa000d35 fix: model logging 2023-05-20 13:09:00 -07:00
Bryce
926692ad03 tests: "prime" the controlnets
Trying to get things working on m1. doesn't fix everything
2023-05-20 13:09:00 -07:00
Bryce
fb19e34acc fix: allow use of upscaler on mps device 2023-05-20 13:09:00 -07:00
Bryce
3b066f8e29 fix: don't hide error messages during upscale 2023-05-20 13:09:00 -07:00
Bryce
eca97a25a0 tests: adjust tests to pass 2023-05-20 13:09:00 -07:00
Bryce
4c77fd376b feature: improvements to memory management
not thoroughly tested on low-memory devices
2023-05-20 13:09:00 -07:00
Bryce
6db296aa37 tests: faster tests 2023-05-20 11:35:49 -07:00
Adam Menges
6f9e9f64a9
docs: fix incorrect command in readme (#327) 2023-05-16 20:56:42 -07:00
Bryce
c082ea523f tests: update tests
- controlnet version changes + graphics card change
2023-05-13 23:41:00 -07:00
Bryce
a8aa9f703a build: exclude broken versions of timm as dependencies 2023-05-13 08:22:53 -07:00
Bryce
2a77b9b048 docs: add python prompt expansion example 2023-05-06 14:11:35 -07:00
Bryce
629c1c9ba9 docs: add cache clear instructions 2023-05-06 14:11:35 -07:00
Bryce
bffe1ffde0 docs: add python upscaling example 2023-05-06 14:11:35 -07:00
Bryce
e5d6880bc3 version: 12.0.2 2023-05-06 13:04:57 -07:00
Bryce
d5a276584b fix: move normal map code inline
Fixes conda package. Fixes #317
2023-05-06 13:01:50 -07:00
Bryce Drennan
726ffe48c9
docs: make controlnet sections linkable 2023-05-05 08:56:05 -07:00
673 changed files with 53935 additions and 6760 deletions


@@ -18,4 +18,6 @@ coverage.xml
.mypy_cache
.pytest_cache
.hypothesis
.DS_Store
.DS_Store
other
outputs

.github/workflows/ci.yaml

@@ -5,47 +5,73 @@ on:
branches:
- master
workflow_dispatch:
env:
PIP_DISABLE_PIP_VERSION_CHECK: 1
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
with:
python-version: 3.9
cache: pip
cache-dependency-path: requirements-dev.txt
- name: Install dependencies
run: |
python -m pip install --disable-pip-version-check wheel pip-tools
pip-sync requirements-dev.txt
python -m pip install --disable-pip-version-check --no-deps .
- name: Lint
run: |
echo "::add-matcher::.github/pylama_matcher.json"
pylama --options tox.ini
autoformat:
runs-on: ubuntu-latest
- uses: actions/checkout@v3
- uses: actions/setup-python@v4.5.0
with:
python-version: "3.10"
- name: Cache dependencies
uses: actions/cache@v3.2.4
id: cache
with:
path: ${{ env.pythonLocation }}
key: ${{ env.pythonLocation }}-${{ hashFiles('requirements-dev.txt') }}-lint
- name: Install Ruff
if: steps.cache.outputs.cache-hit != 'true'
run: grep -E 'ruff==' requirements-dev.txt | xargs pip install
- name: Format
run: |
echo "::add-matcher::.github/pylama_matcher.json"
ruff format --config tests/ruff.toml . --check
- name: Lint
run: |
echo "::add-matcher::.github/pylama_matcher.json"
ruff check --config tests/ruff.toml .
test-gpu:
runs-on: nvidia-4090
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: 3.9
python-version: "3.10"
- name: Install dependencies
run: |
python -m pip install --disable-pip-version-check black==23.1.0 isort==5.12.0
- name: Autoformatter
python -m pip install uv
uv pip uninstall --system torch torchvision xformers triton imaginairy
uv pip sync --system requirements-dev.txt
pip install -e .
- name: Test with pytest
timeout-minutes: 30
env:
CUDA_LAUNCH_BLOCKING: 1
run: |
black --diff .
isort --atomic --profile black --check-only .
test:
pytest --durations=10 -v -m "gputest"
test-non-gpu:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
python-version: ["3.8", "3.10"]
python-version:
- "3.10"
- "3.11"
# torch for python 3.12 is not available yet
# https://github.com/pytorch/pytorch/issues/110436
# - "3.12"
os: ["ubuntu-latest"]
steps:
- uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
@@ -53,22 +79,156 @@ jobs:
cache-dependency-path: requirements-dev.txt
- name: Install dependencies
run: |
python -m pip install --disable-pip-version-check -r requirements-dev.txt
python -m pip install --disable-pip-version-check .
- name: Get current date
id: date
run: echo "::set-output name=curmonth::$(date +'%Y-%m')"
- name: Cache Model Files
id: cache-model-files
uses: actions/cache@v3
with:
path: |
~/.cache/huggingface
~/.cache/clip
~/.cache/imaginairy
~/.cache/torch
key: ${{ steps.date.outputs.curmonth }}-b
python -m pip install uv
uv pip sync --system requirements-dev.txt
pip install -e .
- name: Test with pytest
timeout-minutes: 20
timeout-minutes: 30
env:
CUDA_LAUNCH_BLOCKING: 1
run: |
pytest --durations=50 -v
pytest --durations=10 -v -m "not gputest"
type-check:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4.5.0
with:
python-version: "3.10"
cache: pip
cache-dependency-path: requirements-dev.txt
- name: Install dependencies
run: |
python -m pip install -r requirements-dev.txt . --upgrade
- name: Run mypy
run: |
make type-check
build-wheel:
name: Build Wheel
runs-on: ubuntu-latest
outputs:
wheel_name: ${{ steps.set_wheel_name.outputs.wheel_name }}
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4.5.0
with:
python-version: "3.10"
- name: Install dependencies
run: python -m pip install wheel
- name: Build package
run: python setup.py bdist_wheel
- name: Set wheel filename
id: set_wheel_name
run: echo "wheel_name=$(ls dist/*.whl)" >> "$GITHUB_OUTPUT"
- uses: actions/upload-artifact@v3
with:
name: wheels
path: dist/*.whl
smoke-test-wheel:
needs: build-wheel
name: Smoketest (Python ${{ matrix.python-version }}, ${{ matrix.os }})
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
python-version: ["3.10", "3.11"]
os: ["ubuntu-latest", "windows-latest", "m2-16gb"]
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4.5.0
with:
python-version: ${{ matrix.python-version }}
- uses: actions/download-artifact@v3
with:
name: wheels
path: dist
- name: Install built wheel
env:
WHEEL_FILENAME: ${{ needs.build-wheel.outputs.wheel_name }}
run: |
python -m pip install uv
uv pip install --system ${{ needs.build-wheel.outputs.wheel_name }}
- name: Generate an image
run: |
imagine fruit --steps 3 --size 128 --seed 1
- uses: actions/upload-artifact@v3
with:
name: images
path: outputs/generated/*.jpg
smoke-test-wheel-conda:
needs: build-wheel
name: Smoketest (Conda, Python ${{ matrix.python-version }}, ${{ matrix.os }})
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
python-version: [ "3.10", "3.11" ]
os: [ "ubuntu-latest", "windows-latest", "m2-16gb" ]
steps:
- uses: actions/checkout@v3
- uses: conda-incubator/setup-miniconda@v3
with:
auto-update-conda: true
python-version: ${{ matrix.python-version }}
activate-environment: test-env
create-env-file: true
- uses: actions/download-artifact@v3
with:
name: wheels
path: dist
- name: Install built wheel
shell: bash -l {0}
env:
WHEEL_FILENAME: ${{ steps.set-wheel-name.outputs.WHEEL_FILENAME }}
run: |
conda activate test-env
python -m pip install uv
uv pip install ${{ needs.build-wheel.outputs.wheel_name }}
- name: Generate an image
shell: bash -l {0}
run: |
conda activate test-env
imagine fruit --steps 3 --size 128 --seed 1
- uses: actions/upload-artifact@v3
with:
name: images
path: outputs/generated/*.jpg
# build-docs:
# runs-on: ubuntu-latest
# steps:
# - uses: actions/checkout@v4
# - uses: actions/setup-python@v4
# with:
# python-version: "3.10"
# - run: echo "cache_id=$(date --utc '+%V')" >> $GITHUB_ENV
# - uses: actions/cache@v3
# with:
# key: mkdocs-material-${{ env.cache_id }}
# path: .cache
# restore-keys: |
# mkdocs-material-
# - run: python -m pip install -r requirements-dev.in . --upgrade
# - run: mkdocs build --strict
publish-docs:
if: github.ref == 'refs/heads/master'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Configure Git Credentials
run: |
git config user.name github-actions[bot]
git config user.email 41898282+github-actions[bot]@users.noreply.github.com
- uses: actions/setup-python@v4
with:
python-version: "3.10"
- run: echo "cache_id=$(date --utc '+%V')" >> $GITHUB_ENV
- uses: actions/cache@v3
with:
key: mkdocs-material-${{ env.cache_id }}
path: .cache
restore-keys: |
mkdocs-material-
- run: python -m pip install -r requirements-dev.in . --upgrade
- run: mkdocs gh-deploy --force

.gitignore (3 changes)

@@ -30,3 +30,6 @@ tests/vastai_cli.py
**/.eggs
/img_size_memory_usage.csv
/tests/test_cluster_output/
/.env
*.ipynb
.ipynb_checkpoints

Makefile (122 changes)

@@ -1,5 +1,5 @@
SHELL := /bin/bash
python_version = 3.10.10
python_version = 3.10.13
venv_prefix = imaginairy
venv_name = $(venv_prefix)-$(python_version)
pyenv_instructions=https://github.com/pyenv/pyenv#installation
@@ -9,49 +9,63 @@ pyenv_virt_instructions=https://github.com/pyenv/pyenv-virtualenv#pyenv-virtuale
init: require_pyenv ## Setup a dev environment for local development.
@pyenv install $(python_version) -s
@echo -e "\033[0;32m ✔️ 🐍 $(python_version) installed \033[0m"
@if ! [ -d "$$(pyenv root)/versions/$(venv_name)" ]; then\
pyenv virtualenv $(python_version) $(venv_name);\
fi;
@if ! [ -d "$$(pyenv root)/versions/$(venv_name)" ]; then \
pyenv virtualenv $(python_version) $(venv_name); \
fi
@pyenv local $(venv_name)
@echo -e "\033[0;32m ✔️ 🐍 $(venv_name) virtualenv activated \033[0m"
pip install --upgrade pip pip-tools
pip-sync requirements-dev.txt
pip install -e . --no-deps
# the compiled requirements don't include OS-specific subdependencies so we trigger those this way
pip install `pip freeze | grep "^torch=="`
@echo -e "\nEnvironment setup! ✨ 🍰 ✨ 🐍 \n\nCopy this path to tell PyCharm where your virtualenv is. You may have to click the refresh button in the pycharm file explorer.\n"
@echo -e "\033[0;32m"
@pyenv which python
@echo -e "\n\033[0m"
@echo -e "The following commands are available to run in the Makefile\n"
@export VIRTUAL_ENV=$$(pyenv prefix); \
if command -v uv >/dev/null 2>&1; then \
uv pip install --upgrade uv; \
else \
pip install --upgrade pip uv; \
fi; \
uv pip sync requirements-dev.txt; \
uv pip install -e .
@echo -e "\nEnvironment setup! ✨ 🍰 ✨ 🐍 \n\nCopy this path to tell PyCharm where your virtualenv is. You may have to click the refresh button in the PyCharm file explorer.\n"
@echo -e "\033[0;32m$$(pyenv which python)\033[0m\n"
@echo -e "The following commands are available to run in the Makefile:\n"
@make -s help
af: autoformat ## Alias for `autoformat`
autoformat: ## Run the autoformatter.
@pycln . --all --quiet --extend-exclude __init__\.py
@# ERA,T201
@-ruff --extend-ignore ANN,ARG001,C90,DTZ,D100,D101,D102,D103,D202,D203,D212,D415,E501,RET504,S101,UP006,UP007 --extend-select C,D400,I,W --unfixable T,ERA --fix-only .
@black .
@isort --atomic --profile black .
@-ruff check --config tests/ruff.toml . --fix-only
@ruff format --config tests/ruff.toml .
test: ## Run the tests.
@pytest
@echo -e "The tests pass! ✨ 🍰 ✨"
test-fast: ## Run the fast tests.
@pytest -m "not gputest"
@echo -e "The non-gpu tests pass! ✨ 🍰 ✨"
lint: ## Run the code linter.
@pylama
@ruff check --config tests/ruff.toml .
@echo -e "No linting errors - well done! ✨ 🍰 ✨"
type-check: ## Run the type checker.
@mypy --config-file tox.ini .
check-fast: ## Run autoformatter, linter, typechecker, and fast tests
@make autoformat
@make lint
@make type-check
@make test-fast
build-pkg: ## Build the package
python setup.py sdist bdist_wheel
python setup.py bdist_wheel --plat-name=win-amd64
deploy: ## Deploy the package to pypi.org
pip install twine wheel
-git tag $$(python setup.py -V)
git push --tags
rm -rf dist
python setup.py bdist_wheel
python setup.py bdist_wheel --plat-name=win-amd64
make build-pkg
#python setup.py sdist
@echo 'pypi.org Username: '
@read username && twine upload --verbose dist/* -u $$username;
@twine upload --verbose dist/* -u __token__;
rm -rf build
rm -rf dist
@echo "Deploy successful! ✨ 🍰 ✨"
@@ -72,12 +86,24 @@ require_pyenv:
else\
echo -e "\033[0;32m ✔️ pyenv installed\033[0m";\
fi
@if ! [[ "$$(pyenv virtualenv --version)" == *"pyenv-virtualenv"* ]]; then\
echo -e '\n\033[0;31m ❌ pyenv virtualenv is not installed. Follow instructions here: $(pyenv_virt_instructions) \n\033[0m';\
exit 1;\
else\
echo -e "\033[0;32m ✔️ pyenv-virtualenv installed\033[0m";\
fi
.PHONY: docs
docs:
mkdocs serve
update-stablestudio:
@echo "Updating stablestudio"
cd ../imaginAIry-StableStudio && \
yarn build && \
yarn build:production
rm -rf imaginairy/http/stablestudio/dist
cp -R ../imaginAIry-StableStudio/packages/stablestudio-ui/dist imaginairy/http/stablestudio/dist
rm -rf imaginairy/http/stablestudio/dist/examples
rm -rf imaginairy/http/stablestudio/dist/media
rm -rf imaginairy/http/stablestudio/dist/presets
cp ../imaginAIry-StableStudio/LICENSE imaginairy/http/stablestudio/dist/LICENSE
@echo "Updated stablestudio"
vendor_openai_clip:
mkdir -p ./downloads
@@ -171,13 +197,39 @@ vendorize_controlnet_annotators:
vendorize_surface_normal_uncertainty:
make download_repo REPO=git@github.com:baegwangbin/surface_normal_uncertainty.git PKG=surface_normal_uncertainty COMMIT=fe2b9f1e8a4cac1c73475b023f6454bd23827a48
mkdir -p ./imaginairy/vendored/surface_normal_uncertainty
rm -rf ./imaginairy/vendored/surface_normal_uncertainty/*
cp -R ./downloads/surface_normal_uncertainty/* ./imaginairy/vendored/surface_normal_uncertainty/
vendorize_normal_map:
make download_repo REPO=git@github.com:brycedrennan/imaginairy-normal-map.git PKG=imaginairy_normal_map COMMIT=6b3b1692cbdc21d55c84a01e0b7875df030b6d79
mkdir -p ./imaginairy/vendored/imaginairy_normal_map
rm -rf ./imaginairy/vendored/imaginairy_normal_map/*
cp -R ./downloads/imaginairy_normal_map/imaginairy_normal_map/* ./imaginairy/vendored/imaginairy_normal_map/
make af
vendorize_refiners:
export REPO=git@github.com:finegrain-ai/refiners.git PKG=refiners COMMIT=91aea9b7ff63ddf93f99e2ce6a4452bd658b1948 && \
make download_repo REPO=$$REPO PKG=$$PKG COMMIT=$$COMMIT && \
mkdir -p ./imaginairy/vendored/$$PKG && \
rm -rf ./imaginairy/vendored/$$PKG/* && \
cp -R ./downloads/refiners/src/refiners/* ./imaginairy/vendored/$$PKG/ && \
cp ./downloads/refiners/LICENSE ./imaginairy/vendored/$$PKG/ && \
rm -rf ./imaginairy/vendored/$$PKG/training_utils && \
echo "vendored from $$REPO @ $$COMMIT" | tee ./imaginairy/vendored/$$PKG/readme.txt
find ./imaginairy/vendored/refiners/ -type f -name "*.py" -exec sed -i '' 's/from refiners/from imaginairy.vendored.refiners/g' {} + &&\
find ./imaginairy/vendored/refiners/ -type f -name "*.py" -exec sed -i '' 's/import refiners/import imaginairy.vendored.refiners/g' {} + &&\
make af
vendorize_facexlib:
export REPO=git@github.com:xinntao/facexlib.git PKG=facexlib COMMIT=260620ae93990a300f4b16448df9bb459f1caba9 && \
make download_repo REPO=$$REPO PKG=$$PKG COMMIT=$$COMMIT && \
mkdir -p ./imaginairy/vendored/$$PKG && \
rm -rf ./imaginairy/vendored/$$PKG/* && \
cp -R ./downloads/$$PKG/facexlib/* ./imaginairy/vendored/$$PKG/ && \
rm -rf ./imaginairy/vendored/$$PKG/weights && \
cp ./downloads/$$PKG/LICENSE ./imaginairy/vendored/$$PKG/ && \
echo "vendored from $$REPO @ $$COMMIT" | tee ./imaginairy/vendored/$$PKG/readme.txt
find ./imaginairy/vendored/facexlib/ -type f -name "*.py" -exec sed -i '' 's/from facexlib/from imaginairy.vendored.facexlib/g' {} + &&\
sed -i '' '/from \.version import __gitsha__, __version__/d' ./imaginairy/vendored/facexlib/__init__.py
make af
vendorize: ## vendorize a github repo. `make vendorize REPO=git@github.com:openai/CLIP.git PKG=clip`
mkdir -p ./downloads

README.md (644 changes)

@@ -4,31 +4,134 @@
[![Downloads](https://pepy.tech/badge/imaginairy)](https://pepy.tech/project/imaginairy)
[![image](https://img.shields.io/pypi/v/imaginairy.svg)](https://pypi.org/project/imaginairy/)
[![image](https://img.shields.io/badge/license-MIT-green)](https://github.com/brycedrennan/imaginAIry/blob/master/LICENSE/)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/ambv/black)
[![Python Checks](https://github.com/brycedrennan/imaginAIry/actions/workflows/ci.yaml/badge.svg)](https://github.com/brycedrennan/imaginAIry/actions/workflows/ci.yaml)
[![Discord](https://flat.badgen.net/discord/members/FdD7ut3YjW)](https://discord.gg/FdD7ut3YjW)
AI imagined images. Pythonic generation of stable diffusion images.
AI imagined images. Pythonic generation of stable diffusion images **and videos**!
"just works" on Linux and macOS(M1) (and maybe windows?).
"just works" on Linux and macOS(M1) (and sometimes windows).
```bash
# on macOS, make sure rust is installed first
# be sure to use Python 3.10, Python 3.11 is not supported at the moment
>> pip install imaginairy
>> imagine "a scenic landscape" "a photo of a dog" "photo of a fruit bowl" "portrait photo of a freckled woman" "a bluejay"
# Make an animation showing the generation process
>> imagine --gif "a flower"
# Make an AI video
>> aimg videogen --start-image rocket.png
```
## Stable Video Diffusion
<p float="left">
<img src="assets/000019_786355545_PLMS50_PS7.5_a_scenic_landscape.jpg" height="256">
<img src="assets/000032_337692011_PLMS40_PS7.5_a_photo_of_a_dog.jpg" height="256">
<img src="assets/000056_293284644_PLMS40_PS7.5_photo_of_a_bowl_of_fruit.jpg" height="256">
<img src="assets/000078_260972468_PLMS40_PS7.5_portrait_photo_of_a_freckled_woman.jpg" height="256">
<img src="assets/013986_1_kdpmpp2m59_PS7.5_a_bluejay_[generated].jpg" height="256">
<img src="assets/009719_942389026_kdpmpp2m15_PS7.5_a_flower.gif" height="256">
<img src="docs/assets/svd-rocket.gif" height="190">
<img src="docs/assets/svd-athens.gif" height="190">
<img src="docs/assets/svd-pearl-girl.gif" height="190">
<img src="docs/assets/svd-starry-night.gif" height="190">
<img src="docs/assets/svd-dog.gif" height="190">
<img src="docs/assets/svd-xpbliss.gif" height="190">
</p>
### Rushed release of Stable Video Diffusion!
Works with Nvidia GPUs. Does not work on Mac or CPU.
On Windows you'll need to install torch 2.0 first via https://pytorch.org/get-started/locally/
```text
Usage: aimg videogen [OPTIONS]
AI generate a video from an image
Example:
aimg videogen --start-image assets/rocket-wide.png
Options:
--start-image TEXT Input path for image file.
--num-frames INTEGER Number of frames.
--num-steps INTEGER Number of steps.
--model TEXT Model to use. One of: svd, svd_xt, svd_image_decoder, svd_xt_image_decoder
--fps INTEGER FPS for the AI to target when generating video
--output-fps INTEGER FPS for the output video
--motion-amount INTEGER How much motion to generate. value between 0 and 255.
-r, --repeats INTEGER How many times to repeat the renders. [default: 1]
--cond-aug FLOAT Conditional augmentation.
--seed INTEGER Seed for random number generator.
--decoding_t INTEGER Number of frames decoded at a time.
--output_folder TEXT Output folder.
--help Show this message and exit.
```
### Images
<p float="left">
<img src="docs/assets/026882_1_ddim50_PS7.5_a_scenic_landscape_[generated].jpg" height="256">
<img src="docs/assets/026884_1_ddim50_PS7.5_photo_of_a_dog_[generated].jpg" height="256">
<img src="docs/assets/026890_1_ddim50_PS7.5_photo_of_a_bowl_of_fruit._still_life_[generated].jpg" height="256">
<img src="docs/assets/026885_1_ddim50_PS7.5_girl_with_a_pearl_earring_[generated].jpg" height="256">
<img src="docs/assets/026891_1_ddim50_PS7.5_close-up_photo_of_a_bluejay_[generated].jpg" height="256">
<img src="docs/assets/026893_1_ddim50_PS7.5_macro_photo_of_a_flower_[generated].jpg" height="256">
</p>
### What's New
[See full Changelog here](./docs/changelog.md)
**14.3.0**
- feature: integrates [spandrel](https://github.com/chaiNNer-org/spandrel) for upscaling
- fix: allow loading sdxl models from local paths.
**14.2.0**
- 🎉 feature: add image prompt support via `--image-prompt` and `--image-prompt-strength`
**14.1.1**
- tests: add installation tests for windows, mac, and conda
- fix: dependency issues
**14.1.0**
- 🎉 feature: make video generation smooth by adding frame interpolation
- feature: SDXL weights in the compvis format can now be used
- feature: allow video generation at any size specified by user
- feature: video generations output in "bounce" format
- feature: choose video output format: mp4, webp, or gif
- feature: fix random seed handling in video generation
- docs: auto-publish docs on push to master
- build: remove imageio dependency
- build: vendorize facexlib so we don't install its unneeded dependencies
**14.0.4**
- docs: add a documentation website at https://brycedrennan.github.io/imaginAIry/
- build: remove fairscale dependency
- fix: video generation was broken
**14.0.3**
- fix: several critical bugs with package
- tests: add a wheel smoketest to detect these issues in the future
**14.0.0**
- 🎉 video generation using [Stable Video Diffusion](https://github.com/Stability-AI/generative-models)
- add `--videogen` to any image generation to create a short video from the generated image
- or use `aimg videogen` to generate a video from an image
- 🎉 SDXL (Stable Diffusion Extra Large) models are now supported.
- try `--model opendalle` or `--model sdxl`
- inpainting and controlnets are not yet supported for SDXL
- 🎉 imaginairy is now backed by the [refiners library](https://github.com/finegrain-ai/refiners)
- This was a huge rewrite which is why some features are not yet supported. On the plus side, refiners supports
cutting edge features (SDXL, image prompts, etc) which will be added to imaginairy soon.
- [self-attention guidance](https://github.com/SusungHong/Self-Attention-Guidance) which makes details of images more accurate
- 🎉 feature: larger image generations now work MUCH better and stay faithful to the same image as it looks at a smaller size.
For example `--size 720p --seed 1` and `--size 1080p --seed 1` will produce the same image for SD15
- 🎉 feature: loading diffusers based models now supported. Example `--model https://huggingface.co/ainz/diseny-pixar --model-architecture sd15`
- 🎉 feature: qrcode controlnet!
### Run API server and StableStudio web interface (alpha)
Generate images via API or web interface. Much smaller featureset compared to the command line tool.
```bash
>> aimg server
```
Visit http://localhost:8000/ and http://localhost:8000/docs
<img src="https://github.com/Stability-AI/StableStudio/blob/a65d4877ad7d309627808a169818f1add8c278ae/misc/GenerateScreenshot.png?raw=true" width="512">
### Image Structure Control [by ControlNet](https://github.com/lllyasviel/ControlNet)
#### (Not supported for SDXL yet)
Generate images guided by body poses, depth maps, canny edges, hed boundaries, or normal maps.
**Openpose Control**
@@ -38,60 +141,60 @@ imagine --control-image assets/indiana.jpg --control-mode openpose --caption-te
```
<p float="left">
<img src="assets/indiana.jpg" height="256">
<img src="assets/indiana-pose.jpg" height="256">
<img src="assets/indiana-pose-polar-bear.jpg" height="256">
<img src="docs/assets/indiana.jpg" height="256">
<img src="docs/assets/indiana-pose.jpg" height="256">
<img src="docs/assets/indiana-pose-polar-bear.jpg" height="256">
</p>
**Canny Edge Control**
#### Canny Edge Control
```bash
imagine --control-image assets/lena.png --control-mode canny "photo of a woman with a hat looking at the camera"
```
<p float="left">
<img src="assets/lena.png" height="256">
<img src="assets/lena-canny.jpg" height="256">
<img src="assets/lena-canny-generated.jpg" height="256">
<img src="docs/assets/lena.png" height="256">
<img src="docs/assets/lena-canny.jpg" height="256">
<img src="docs/assets/lena-canny-generated.jpg" height="256">
</p>
**HED Boundary Control**
#### HED Boundary Control
```bash
imagine --control-image dog.jpg --control-mode hed "photo of a dalmation"
```
<p float="left">
<img src="assets/000032_337692011_PLMS40_PS7.5_a_photo_of_a_dog.jpg" height="256">
<img src="assets/dog-hed-boundary.jpg" height="256">
<img src="assets/dog-hed-boundary-dalmation.jpg" height="256">
<img src="docs/assets/000032_337692011_PLMS40_PS7.5_a_photo_of_a_dog.jpg" height="256">
<img src="docs/assets/dog-hed-boundary.jpg" height="256">
<img src="docs/assets/dog-hed-boundary-dalmation.jpg" height="256">
</p>
**Depth Map Control**
#### Depth Map Control
```bash
imagine --control-image fancy-living.jpg --control-mode depth "a modern living room"
```
<p float="left">
<img src="assets/fancy-living.jpg" height="256">
<img src="assets/fancy-living-depth.jpg" height="256">
<img src="assets/fancy-living-depth-generated.jpg" height="256">
<img src="docs/assets/fancy-living.jpg" height="256">
<img src="docs/assets/fancy-living-depth.jpg" height="256">
<img src="docs/assets/fancy-living-depth-generated.jpg" height="256">
</p>
**Normal Map Control**
#### Normal Map Control
```bash
imagine --control-image bird.jpg --control-mode normal "a bird"
```
<p float="left">
<img src="assets/013986_1_kdpmpp2m59_PS7.5_a_bluejay_[generated].jpg" height="256">
<img src="assets/bird-normal.jpg" height="256">
<img src="assets/bird-normal-generated.jpg" height="256">
<img src="docs/assets/013986_1_kdpmpp2m59_PS7.5_a_bluejay_[generated].jpg" height="256">
<img src="docs/assets/bird-normal.jpg" height="256">
<img src="docs/assets/bird-normal-generated.jpg" height="256">
</p>
**Image Shuffle Control**
#### Image Shuffle Control
Generates the image based on elements of the control image. Kind of similar to style transfer.
```bash
@@ -99,12 +202,12 @@ imagine --control-image pearl-girl.jpg --control-mode shuffle "a clown"
```
The middle image is the "shuffled" input image
<p float="left">
<img src="assets/girl_with_a_pearl_earring.jpg" height="256">
<img src="assets/pearl_shuffle_019331_1_kdpmpp2m15_PS7.5_img2img-0.0_a_clown.jpg" height="256">
<img src="assets/pearl_shuffle_clown_019331_1_kdpmpp2m15_PS7.5_img2img-0.0_a_clown.jpg" height="256">
<img src="docs/assets/girl_with_a_pearl_earring.jpg" height="256">
<img src="docs/assets/pearl_shuffle_019331_1_kdpmpp2m15_PS7.5_img2img-0.0_a_clown.jpg" height="256">
<img src="docs/assets/pearl_shuffle_clown_019331_1_kdpmpp2m15_PS7.5_img2img-0.0_a_clown.jpg" height="256">
</p>
**Edit Instructions Control**
#### Editing Instructions Control
Similar to InstructPix2Pix (below) but works with any SD 1.5 based model.
```bash
@@ -112,36 +215,52 @@ imagine --control-image pearl-girl.jpg --control-mode edit --init-image-strengt
```
<p float="left">
<img src="assets/girl_with_a_pearl_earring.jpg" height="256">
<img src="assets/pearl_anime_019537_521829407_kdpmpp2m30_PS9.0_img2img-0.01_make_it_anime.jpg" height="256">
<img src="assets/pearl_beach_019561_862735879_kdpmpp2m30_PS7.0_img2img-0.01_make_it_at_the_beach.jpg" height="256">
<img src="docs/assets/girl_with_a_pearl_earring.jpg" height="256">
<img src="docs/assets/pearl_anime_019537_521829407_kdpmpp2m30_PS9.0_img2img-0.01_make_it_anime.jpg" height="256">
<img src="docs/assets/pearl_beach_019561_862735879_kdpmpp2m30_PS7.0_img2img-0.01_make_it_at_the_beach.jpg" height="256">
</p>
**Add Details Control (upscaling/super-resolution)**
#### Add Details Control (upscaling/super-resolution)
Replaces existing details in an image. Good to use with --init-image-strength 0.2
```bash
imagine --control-image "assets/wishbone.jpg" --control-mode tile "sharp focus, high-resolution" --init-image-strength 0.2 --steps 30 -w 2048 -h 2048
imagine --control-image "assets/wishbone.jpg" --control-mode details "sharp focus, high-resolution" --init-image-strength 0.2 --steps 30 -w 2048 -h 2048
```
<p float="left">
<img src="assets/wishbone_headshot_badscale.jpg" height="256">
<img src="assets/wishbone_headshot_details.jpg" height="256">
<img src="docs/assets/wishbone_headshot_badscale.jpg" height="256">
<img src="docs/assets/wishbone_headshot_details.jpg" height="256">
</p>
### Image (re)Colorization (using brightness control)
Colorize black and white images or re-color existing images.
The generated colors will be applied back to the original image. You can either provide a caption or
allow the tool to generate one for you.
```bash
aimg colorize pearl-girl.jpg --caption "photo of a woman"
```
<p float="left">
<img src="docs/assets/girl_with_a_pearl_earring.jpg" height="256">
<img src="docs/assets/pearl-gray.jpg" height="256">
<img src="docs/assets/pearl-recolor-a.jpg" height="256">
</p>
### Instruction based image edits [by InstructPix2Pix](https://github.com/timothybrooks/instruct-pix2pix)
#### (Broken as of 14.0.0)
Just tell imaginairy how to edit the image and it will do it for you!
<p float="left">
<img src="assets/scenic_landscape_winter.jpg" height="256">
<img src="assets/dog_red.jpg" height="256">
<img src="assets/bowl_of_fruit_strawberries.jpg" height="256">
<img src="assets/freckled_woman_cyborg.jpg" height="256">
<img src="assets/014214_51293814_kdpmpp2m30_PS10.0_img2img-1.0_make_the_bird_wear_a_cowboy_hat_[generated].jpg" height="256">
<img src="assets/flower-make-the-flower-out-of-paper-origami.gif" height="256">
<img src="assets/girl-pearl-clown-compare.gif" height="256">
<img src="assets/mona-lisa-headshot-anim.gif" height="256">
<img src="assets/make-it-night-time.gif" height="256">
<img src="docs/assets/scenic_landscape_winter.jpg" height="256">
<img src="docs/assets/dog_red.jpg" height="256">
<img src="docs/assets/bowl_of_fruit_strawberries.jpg" height="256">
<img src="docs/assets/freckled_woman_cyborg.jpg" height="256">
<img src="docs/assets/014214_51293814_kdpmpp2m30_PS10.0_img2img-1.0_make_the_bird_wear_a_cowboy_hat_[generated].jpg" height="256">
<img src="docs/assets/flower-make-the-flower-out-of-paper-origami.gif" height="256">
<img src="docs/assets/girl-pearl-clown-compare.gif" height="256">
<img src="docs/assets/mona-lisa-headshot-anim.gif" height="256">
<img src="docs/assets/make-it-night-time.gif" height="256">
</p>
<details>
@@ -174,12 +293,12 @@ Want to just quickly have some fun? Try `edit-demo` to apply some pre-defined edits
>> aimg edit-demo pearl_girl.jpg
```
<p float="left">
<img src="assets/girl_with_a_pearl_earring_suprise.gif" height="256">
<img src="assets/mona-lisa-suprise.gif" height="256">
<img src="assets/luke-suprise.gif" height="256">
<img src="assets/spock-suprise.gif" height="256">
<img src="assets/gg-bridge-suprise.gif" height="256">
<img src="assets/shire-suprise.gif" height="256">
<img src="docs/assets/girl_with_a_pearl_earring_suprise.gif" height="256">
<img src="docs/assets/mona-lisa-suprise.gif" height="256">
<img src="docs/assets/luke-suprise.gif" height="256">
<img src="docs/assets/spock-suprise.gif" height="256">
<img src="docs/assets/gg-bridge-suprise.gif" height="256">
<img src="docs/assets/shire-suprise.gif" height="256">
</p>
@@ -207,11 +326,11 @@ When writing strength modifiers keep in mind that pixel values are between 0 and
--fix-faces \
"a modern female president" "a female robot" "a female doctor" "a female firefighter"
```
<img src="assets/mask_examples/pearl000.jpg" height="200">➡️
<img src="assets/mask_examples/pearl_pres.png" height="200">
<img src="assets/mask_examples/pearl_robot.png" height="200">
<img src="assets/mask_examples/pearl_doctor.png" height="200">
<img src="assets/mask_examples/pearl_firefighter.png" height="200">
<img src="docs/assets/mask_examples/pearl000.jpg" height="200">➡️
<img src="docs/assets/mask_examples/pearl_pres.png" height="200">
<img src="docs/assets/mask_examples/pearl_robot.png" height="200">
<img src="docs/assets/mask_examples/pearl_doctor.png" height="200">
<img src="docs/assets/mask_examples/pearl_firefighter.png" height="200">
```bash
>> imagine \
@@ -222,11 +341,11 @@ When writing strength modifiers keep in mind that pixel values are between 0 and
--init-image-strength .1 \
"a bowl of kittens" "a bowl of gold coins" "a bowl of popcorn" "a bowl of spaghetti"
```
<img src="assets/000056_293284644_PLMS40_PS7.5_photo_of_a_bowl_of_fruit.jpg" height="200">➡️
<img src="assets/mask_examples/bowl004.jpg" height="200">
<img src="assets/mask_examples/bowl001.jpg" height="200">
<img src="assets/mask_examples/bowl002.jpg" height="200">
<img src="assets/mask_examples/bowl003.jpg" height="200">
<img src="docs/assets/000056_293284644_PLMS40_PS7.5_photo_of_a_bowl_of_fruit.jpg" height="200">➡️
<img src="docs/assets/mask_examples/bowl004.jpg" height="200">
<img src="docs/assets/mask_examples/bowl001.jpg" height="200">
<img src="docs/assets/mask_examples/bowl002.jpg" height="200">
<img src="docs/assets/mask_examples/bowl003.jpg" height="200">
### Face Enhancement [by CodeFormer](https://github.com/sczhou/CodeFormer)
@@ -238,41 +357,64 @@ When writing strength modifiers keep in mind that pixel values are between 0 and
<img src="https://github.com/brycedrennan/imaginAIry/raw/master/assets/000178_1_PLMS40_PS7.5_a_couple_smiling_fixed.png" height="256">
### Upscaling [by RealESRGAN](https://github.com/xinntao/Real-ESRGAN)
```bash
>> imagine "colorful smoke" --steps 40 --upscale
# upscale an existing image
>> aimg upscale my-image.jpg
## Image Upscaling
Upscale images easily.
=== "CLI"
```bash
aimg upscale assets/000206_856637805_PLMS40_PS7.5_colorful_smoke.jpg --upscale-model real-hat
```
=== "Python"
```py
from imaginairy.api.upscale import upscale
img = upscale(img="assets/000206_856637805_PLMS40_PS7.5_colorful_smoke.jpg")
img.save("colorful_smoke.upscaled.jpg")
```
<img src="docs/assets/000206_856637805_PLMS40_PS7.5_colorful_smoke.jpg" width="25%" height="auto"> ➡️
<img src="docs/assets/000206_856637805_PLMS40_PS7.5_colorful_smoke_upscaled.jpg" width="50%" height="auto">
Upscaling uses [Spandrel](https://github.com/chaiNNer-org/spandrel) to make it easy to use different upscaling models.
You can view the integrated models by running `aimg upscale --list-models`, and then use one with `--upscale-model <model-name>`.
URLs are also accepted if you want to upscale an image with a different model. Control the new file format/location with --format.
```python
from imaginairy.enhancers.upscale_realesrgan import upscale_image
from PIL import Image

img = Image.open("my-image.jpg")
big_img = upscale_image(img)
```
<img src="https://github.com/brycedrennan/imaginAIry/raw/master/assets/000206_856637805_PLMS40_PS7.5_colorful_smoke.jpg" height="128"> ➡️
<img src="https://github.com/brycedrennan/imaginAIry/raw/master/assets/000206_856637805_PLMS40_PS7.5_colorful_smoke_upscaled.jpg" height="256">
### Tiled Images
```bash
>> imagine "gold coins" "a lush forest" "piles of old books" leaves --tile
```
<img src="assets/000066_801493266_PLMS40_PS7.5_gold_coins.jpg" height="128"><img src="assets/000066_801493266_PLMS40_PS7.5_gold_coins.jpg" height="128"><img src="assets/000066_801493266_PLMS40_PS7.5_gold_coins.jpg" height="128">
<img src="assets/000118_597948545_PLMS40_PS7.5_a_lush_forest.jpg" height="128"><img src="assets/000118_597948545_PLMS40_PS7.5_a_lush_forest.jpg" height="128"><img src="assets/000118_597948545_PLMS40_PS7.5_a_lush_forest.jpg" height="128">
<img src="docs/assets/000066_801493266_PLMS40_PS7.5_gold_coins.jpg" height="128"><img src="docs/assets/000066_801493266_PLMS40_PS7.5_gold_coins.jpg" height="128"><img src="docs/assets/000066_801493266_PLMS40_PS7.5_gold_coins.jpg" height="128">
<img src="docs/assets/000118_597948545_PLMS40_PS7.5_a_lush_forest.jpg" height="128"><img src="docs/assets/000118_597948545_PLMS40_PS7.5_a_lush_forest.jpg" height="128"><img src="docs/assets/000118_597948545_PLMS40_PS7.5_a_lush_forest.jpg" height="128">
<br>
<img src="assets/000075_961095192_PLMS40_PS7.5_piles_of_old_books.jpg" height="128"><img src="assets/000075_961095192_PLMS40_PS7.5_piles_of_old_books.jpg" height="128"><img src="assets/000075_961095192_PLMS40_PS7.5_piles_of_old_books.jpg" height="128">
<img src="assets/000040_527733581_PLMS40_PS7.5_leaves.jpg" height="128"><img src="assets/000040_527733581_PLMS40_PS7.5_leaves.jpg" height="128"><img src="assets/000040_527733581_PLMS40_PS7.5_leaves.jpg" height="128">
<img src="docs/assets/000075_961095192_PLMS40_PS7.5_piles_of_old_books.jpg" height="128"><img src="docs/assets/000075_961095192_PLMS40_PS7.5_piles_of_old_books.jpg" height="128"><img src="docs/assets/000075_961095192_PLMS40_PS7.5_piles_of_old_books.jpg" height="128">
<img src="docs/assets/000040_527733581_PLMS40_PS7.5_leaves.jpg" height="128"><img src="docs/assets/000040_527733581_PLMS40_PS7.5_leaves.jpg" height="128"><img src="docs/assets/000040_527733581_PLMS40_PS7.5_leaves.jpg" height="128">
#### 360 degree images
```bash
imagine --tile-x -w 1024 -h 512 "360 degree equirectangular panorama photograph of the desert" --upscale
```
<img src="assets/desert_360.jpg" height="128">
<img src="docs/assets/desert_360.jpg" height="128">
### Image-to-Image
Use depth maps for amazing "translations" of existing images.
```bash
>> imagine --model SD-2.0-depth --init-image girl_with_a_pearl_earring_large.jpg --init-image-strength 0.05 "professional headshot photo of a woman with a pearl earring" -r 4 -w 1024 -h 1024 --steps 50
>> imagine --init-image girl_with_a_pearl_earring_large.jpg --init-image-strength 0.05 "professional headshot photo of a woman with a pearl earring" -r 4 -w 1024 -h 1024 --steps 50
```
<p float="left">
<img src="tests/data/girl_with_a_pearl_earring.jpg" width="256"> ➡️
<img src="assets/pearl_depth_1.jpg" width="256">
<img src="assets/pearl_depth_2.jpg" width="256">
<img src="docs/assets/pearl_depth_1.jpg" width="256">
<img src="docs/assets/pearl_depth_2.jpg" width="256">
</p>
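The same low-strength "translation" can be sketched from Python, assuming the v14+ import paths (image size is left at the defaults here; the CLI's `-w`/`-h` flags control it above):

```python
from imaginairy.api import imagine_image_files
from imaginairy.schema import ImaginePrompt, LazyLoadingImage

# A very low init-image strength keeps the composition/depth of the
# source photo while regenerating nearly everything else.
prompt = ImaginePrompt(
    "professional headshot photo of a woman with a pearl earring",
    init_image=LazyLoadingImage(filepath="girl_with_a_pearl_earring_large.jpg"),
    init_image_strength=0.05,
)
imagine_image_files([prompt], outdir="./outputs")
```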
@@ -289,12 +431,9 @@ Example:
### Work with different generation models
<p float="left">
<img src="assets/fairytale-treehouse-sd14.jpg" height="256">
<img src="assets/fairytale-treehouse-sd15.jpg" height="256">
<img src="assets/fairytale-treehouse-sd20.jpg" height="256">
<img src="assets/fairytale-treehouse-sd21.jpg" height="256">
<img src="assets/fairytale-treehouse-openjourney-v1.jpg" height="256">
<img src="assets/fairytale-treehouse-openjourney-v2.jpg" height="256">
<img src="docs/assets/fairytale-treehouse-sd15.jpg" height="256">
<img src="docs/assets/fairytale-treehouse-openjourney-v1.jpg" height="256">
<img src="docs/assets/fairytale-treehouse-openjourney-v2.jpg" height="256">
</p>
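Models can be selected per prompt from Python as well. A sketch; the keyword has varied across releases (`model` in older versions, `model_weights` more recently), so treat the name as illustrative:

```python
from imaginairy.api import imagine_image_files
from imaginairy.schema import ImaginePrompt

# Generate the same prompt with two different model weights.
prompts = [
    ImaginePrompt("fairytale treehouse", model_weights="sd15"),
    ImaginePrompt("fairytale treehouse", model_weights="openjourney-v2"),
]
imagine_image_files(prompts, outdir="./outputs")
```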
<details>
@@ -323,15 +462,27 @@ You can use `{}` to randomly pull values from lists. A list of values separated
`imagine "a {lime|blue|silver|aqua} colored dog" -r 4 --seed 0` (note that it generates a dog of each color without repetition)
<img src="assets/000184_0_plms40_PS7.5_a_silver_colored_dog_[generated].jpg" height="200"><img src="assets/000186_0_plms40_PS7.5_a_aqua_colored_dog_[generated].jpg" height="200">
<img src="assets/000210_0_plms40_PS7.5_a_lime_colored_dog_[generated].jpg" height="200">
<img src="assets/000211_0_plms40_PS7.5_a_blue_colored_dog_[generated].jpg" height="200">
<img src="docs/assets/000184_0_plms40_PS7.5_a_silver_colored_dog_[generated].jpg" height="200"><img src="docs/assets/000186_0_plms40_PS7.5_a_aqua_colored_dog_[generated].jpg" height="200">
<img src="docs/assets/000210_0_plms40_PS7.5_a_lime_colored_dog_[generated].jpg" height="200">
<img src="docs/assets/000211_0_plms40_PS7.5_a_blue_colored_dog_[generated].jpg" height="200">
`imagine "a {_color_} dog" -r 4 --seed 0` will generate four, different colored dogs. The colors will be pulled from an included
phraselist of colors.
`imagine "a {_spaceship_|_fruit_|hot air balloon}. low-poly" -r 4 --seed 0` will generate images of spaceships or fruits or a hot air balloon
<details>
<summary>Python example</summary>
```python
from imaginairy.enhancers.prompt_expansion import expand_prompts
my_prompt = "a giant {_animal_}"
expanded_prompts = expand_prompts(n=10, prompt_text=my_prompt, prompt_library_paths=["./prompts"])
```
</details>
Credit to [noodle-soup-prompts](https://github.com/WASasquatch/noodle-soup-prompts/) where most, but not all, of the wordlists originate.
### Generate image captions (via [BLIP](https://github.com/salesforce/BLIP))
@@ -369,8 +520,7 @@ a bowl full of gold bars sitting on a table
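Captioning is also available from Python. A minimal sketch, assuming the module path used in earlier releases of this README still applies:

```python
from imaginairy.enhancers.describe_image_blip import generate_caption
from imaginairy.schema import LazyLoadingImage

# Prints a one-line description of the image, like the
# "a bowl full of gold bars sitting on a table" example above.
img = LazyLoadingImage(filepath="my-image.jpg")
print(generate_caption(img.as_pillow()))
```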
- Prompt metadata saved into image file metadata
- Have AI generate captions for images `aimg describe <filename-or-url>`
- Interactive prompt: just run `aimg`
- finetune your own image model. kind of like dreambooth. Read instructions on ["Concept Training"](docs/concept-training.md) page
## How To
For full command line instructions run `aimg --help`
@@ -430,311 +580,21 @@ docker run -it --gpus all -v $HOME/.cache/huggingface:/root/.cache/huggingface -
## Q&A
**Q**: How do I change the cache directory for where models are stored?
#### Q: How do I change the cache directory for where models are stored?
**A**: Set the `HUGGINGFACE_HUB_CACHE` environment variable.
A: Set the `HUGGINGFACE_HUB_CACHE` environment variable.
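For example, from Python the variable should be set before the model-loading code is imported, since some versions read it at import time (the path is illustrative):

```python
import os

# Point the HuggingFace model cache at a bigger disk. Must happen
# before importing imaginairy so the new location is picked up.
os.environ["HUGGINGFACE_HUB_CACHE"] = "/mnt/big-disk/hf-cache"

from imaginairy.api import imagine_image_files  # noqa: E402
```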
#### Q: How do I free up disk space?
## ChangeLog
A: The AI models are cached in `~/.cache/` (or `HUGGINGFACE_HUB_CACHE`). To delete the cache remove the following folders:
- ~/.cache/imaginairy
- ~/.cache/clip
- ~/.cache/torch
- ~/.cache/huggingface
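A small sketch that clears those folders from Python; the models will simply be re-downloaded on next use:

```python
import shutil
from pathlib import Path

# Remove each cache directory listed above, ignoring any that
# don't exist on this machine.
for name in ["imaginairy", "clip", "torch", "huggingface"]:
    shutil.rmtree(Path.home() / ".cache" / name, ignore_errors=True)
```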
**12.0.1**
- fix: use correct device for depth images on mps. Fixes #300
**12.0.0**
- 🎉 feature: add "detail" control mode. Add details to an image. Great for upscaling an image.
- 🎉 feature: add "edit" control mode. Edit images using text instructions with any SD 1.5 based model. Similar to instructPix2Pix.
- 🎉 feature: add "shuffle" control mode. Image is generated from elements of control image. Similar to style transfer.
- 🎉 feature: upgrade to [controlnet 1.1](https://github.com/lllyasviel/ControlNet-v1-1-nightly)
- 🎉 fix: controlnet now works with all SD 1.5 based models
- feature: add openjourney-v4
- fix: raw control images are now properly loaded. fixes #296
- fix: filenames start numbers after latest image, even if some previous images were deleted
**11.1.1**
- fix: fix globbing bug with input image path handling
- fix: changed `sample` to `True` when generating captions with the BLIP model
**11.1.0**
- docs: add some example use cases
- feature: add art-scene, desktop-background, interior-style, painting-style phraselists
- fix: compilation animations create normal slideshows instead of "bounces"
- fix: file globbing works in the interactive shell
- fix: fix model downloads that were broken by [library change in transformers 4.27.0](https://github.com/huggingface/transformers/commit/8f3b4a1d5bd97045541c43179efe8cd9c58adb76)
**11.0.0**
- all these changes together mean the same seed/sampler is no longer guaranteed to produce the same image (thus the major version bump)
- fix: image composition didn't work very well. Works well now but probably very slow on non-cuda platforms
- fix: remove upscaler tiling message
- fix: improve k-diffusion sampler schedule. significantly improves image quality of default sampler
- fix: img2img was broken for all samplers except plms and ddim when init image strength was >~0.25
**10.2.0**
- feature: input raw control images (a pose, canny map, depth map, etc) directly using `--control-image-raw`
This is opposed to current behavior of extracting the control signal from an input image via `--control-image`
- feature: `aimg model-list` command lists included models
- feature: system memory added to `aimg system-info` command
- feature: add `--fix-faces` options to `aimg upscale` command
- fix: add missing metadata attributes to generated images
- fix: image composition step was producing unnecessarily blurry images
- refactor: split `aimg` cli code into multiple files
- docs: pypi docs now link properly to github automatically
**10.1.0**
- feature: 🎉 ControlNet integration! Control the structure of generated images.
- feature: `aimg colorize` attempts to use controlnet to colorize images
- feature: `--caption-text` command adds text at the bottom left of an image
**10.0.1**
- fix: `edit` was broken
**10.0.0**
- feature: 🎉🎉 Make large images while retaining composition. Try `imagine "a flower" -w 1920 -h 1080`
- fix: create compilations directory automatically
- perf: sliced encoding of images to latents (removes memory bottleneck)
- perf: use the SiLU activation for a performance improvement over the generic nonlinearity
- perf: `xformers` added as a dependency for linux and windows. Gives a nice speed boost.
- perf: sliced attention now runs on MacOS. A typo prevented that from happening previously.
- perf: sliced latent decoding - now possible to make much bigger images. 3310x3310 on 11 GB GPU.
**9.0.2**
- fix: edit interface was broken
**9.0.1**
- fix: use entry_points for windows since setup.py scripts doesn't work on windows [#239](https://github.com/brycedrennan/imaginAIry/issues/239)
**9.0.0**
- perf: cli now has minimal overhead such that `aimg --help` runs in ~650ms instead of ~3400ms
- feature: `edit` and `imagine` commands now accept multiple images (which they will process separately). This allows
batch editing of images as requested in [#229](https://github.com/brycedrennan/imaginAIry/issues/229)
- refactor: move `--surprise-me` to its own subcommand `edit-demo`
- feature: allow selection of output image format with `--output-file-extension`
- docs: make training fail on MPS platform with useful error message
- docs: add directions on how to change model cache path
**8.3.1**
- fix: init-image-strength type
**8.3.0**
- feature: create `gifs` or `mp4s` from any images made in a single run with `--compilation-anim gif`
- feature: create a series of images or edits by iterating over a parameter with the `--arg-schedule` argument
- feature: `openjourney-v1` and `openjourney-v2` models added. available via `--model openjourney-v2`
- feature: add upscale command line function: `aimg upscale`
- feature: `--gif` option will create a gif showing the generation process for a single image
- feature: `--compare-gif` option will create a comparison gif for any image edits
- fix: tile mode was broken since latest perf improvements
**8.2.0**
- feature: added `aimg system-info` command to help debug issues
**8.1.0**
- feature: some memory optimizations and documentation
- feature: surprise-me improvements
- feature: image sizes can now be multiples of 8 instead of 64. Inputs will be silently rounded down.
- feature: cleaned up `aimg` shell logs
- feature: auto-regen for unsafe images
- fix: make blip filename windows compatible
- fix: make captioning work with alpha pngs
**8.0.5**
- fix: bypass huggingface cache retrieval bug
**8.0.4**
- fix: limit attention slice size on MacOS machines with 64gb (#175)
**8.0.3**
- fix: use python 3.7 compatible lru_cache
- fix: use windows compatible filenames
**8.0.2**
- fix: hf_hub_download() got an unexpected keyword argument 'token'
**8.0.1**
- fix: spelling mistake of "surprise"
**8.0.0**
- feature: 🎉 edit images with instructions alone!
- feature: when editing an image add `--gif` to create a comparison gif
- feature: `aimg edit --surprise-me --gif my-image.jpg` for some fun pre-programmed edits
- feature: prune-ckpt command also removes the non-ema weights
**7.6.0**
- fix: default model config was broken
- feature: print version with `--version`
- feature: ability to load safetensors
- feature: 🎉 outpainting. Examples: `--outpaint up10,down300,left50,right50` or `--outpaint all100` or `--outpaint u100,d200,l300,r400`
**7.4.3**
- fix: handle old pytorch lightning imports with a graceful failure (fixes #161)
- fix: handle failed image generations better (fixes #83)
**7.4.2**
- fix: run face enhancement on GPU for 10x speedup
**7.4.1**
- fix: incorrect config files being used for non-1.0 models
**7.4.0**
- feature: 🎉 finetune your own image model. kind of like dreambooth. Read instructions on ["Concept Training"](docs/concept-training.md) page
- feature: image prep command. crops to face or other interesting parts of photo
- fix: back-compat for hf_hub_download
- feature: add prune-ckpt command
- feature: allow specification of model config file
**7.3.0**
- feature: 🎉 depth-based image-to-image generations (and inpainting)
- fix: k_euler_a produces more consistent images per seed (randomization respects the seed again)
**7.2.0**
- feature: 🎉 tile in a single dimension ("x" or "y"). This enables, with a bit of luck, generation of 360 VR images.
Try this for example: `imagine --tile-x -w 1024 -h 512 "360 degree equirectangular panorama photograph of the mountains" --upscale`
**7.1.1**
- fix: memory/speed regression introduced in 6.1.0
- fix: model switching now clears memory better, thus avoiding out of memory errors
**7.1.0**
- feature: 🎉 Stable Diffusion 2.1. Generated people are no longer (completely) distorted.
Use with `--model SD-2.1` or `--model SD-2.0-v`
**7.0.0**
- feature: negative prompting. `--negative-prompt` or `ImaginePrompt(..., negative_prompt="ugly, deformed, extra arms, etc")`
- feature: a default negative prompt is added to all generations. Images in SD-2.0 don't look bad anymore. Images in 1.5 look improved as well.
**6.1.2**
- fix: add back in memory-efficient algorithms
**6.1.1**
- feature: xformers will be used if available (for faster generation)
- fix: version metadata was broken
**6.1.0**
- feature: use different default steps and image sizes depending on the sampler and model selected
- fix: #110 use proper version in image metadata
- refactor: samplers all have their own class that inherits from ImageSampler
- feature: 🎉🎉🎉 Stable Diffusion 2.0
- `--model SD-2.0` to use (it makes worse images than 1.5 though...)
- Tested on macOS and Linux
- All samplers working for new 512x512 model
- New inpainting model working
- 768x768 model working for all samplers except PLMS (`--model SD-2.0-v`)
**5.1.0**
- feature: add progress image callback
**5.0.1**
- fix: support larger images on M1. Fixes #8
- fix: support CPU generation by disabling autocast on CPU. Fixes #81
**5.0.0**
- feature: 🎉 inpainting support using new inpainting model from RunwayML. It works really well! By default, the
inpainting model will automatically be used for any image-masking task
- feature: 🎉 new default sampler makes image generation more than twice as fast
- feature: added `DPM++ 2S a` and `DPM++ 2M` samplers.
- feature: improve progress image logging
- fix: fix bug with `--show-work`. fixes #84
- fix: add workaround for pytorch bug affecting macOS users using the new `DPM++ 2S a` and `DPM++ 2M` samplers.
- fix: add workaround for pytorch mps bug affecting `k_dpm_fast` sampler. fixes #75
- fix: larger image sizes now work on macOS. fixes #8
**4.1.0**
- feature: allow dynamic switching between models/weights (`--model SD-1.5` or `--model SD-1.4` or `--model path/my-custom-weights.ckpt`)
- feature: log total progress when generating images (image X out of Y)
**4.0.0**
- feature: stable diffusion 1.5 (slightly improved image quality)
- feature: dilation and erosion of masks
Previously the `+` and `-` characters in a mask (example: `face{+0.1}`) added to the grayscale value of any masked areas. This wasn't very useful. The new behavior is that the mask will expand or contract by the number of pixels specified. The technical terms for this are dilation and erosion. This allows much greater control over the masked area.
- feature: update k-diffusion samplers. add k_dpm_adaptive and k_dpm_fast
- feature: img2img/inpainting supported on all samplers
- refactor: consolidates img2img/txt2img code. consolidates schedules. consolidates masking
- ci: minor logging improvements
**3.0.1**
- fix: k-samplers were broken
**3.0.0**
- feature: improved safety filter
**2.4.0**
- 🎉 feature: prompt expansion
- feature: make (blip) photo captions more descriptive
**2.3.1**
- fix: face fidelity default was broken
**2.3.0**
- feature: model weights file can be specified via `--model-weights-path` argument at the command line
- fix: set face fidelity default back to old value
- fix: handle small images without throwing exception. credit to @NiclasEriksen
- docs: add setuptools-rust as dependency for macos
**2.2.1**
- fix: init image is fully ignored if init-image-strength = 0
**2.2.0**
- feature: face enhancement fidelity is now configurable
**2.1.0**
- [improved masking accuracy from clipseg](https://github.com/timojl/clipseg/issues/8#issuecomment-1259150865)
**2.0.3**
- fix memory leak in face enhancer
- fix blurry inpainting
- fix for pillow compatibility
**2.0.0**
- 🎉 fix: inpainted areas correlate with surrounding image, even at 100% generation strength. Previously if the generation strength was high enough the generated image
would be uncorrelated to the rest of the surrounding image. It created terrible looking images.
- 🎉 feature: interactive prompt added. access by running `aimg`
- 🎉 feature: Specify advanced text based masks using boolean logic and strength modifiers. Mask descriptions must be lowercase. Keywords uppercase.
Valid symbols: `AND`, `OR`, `NOT`, `()`, and mask strength modifier `{+0.1}` where `+` can be any of `+ - * /`. Single character boolean operators also work (`|`, `&`, `!`)
- 🎉 feature: apply mask edits to original files with `mask_modify_original` (on by default)
- feature: auto-rotate images if exif data specifies to do so
- fix: mask boundaries are more accurate
- fix: accept mask images in command line
- fix: img2img algorithm was wrong and wouldn't work at values close to 0 or 1
**1.6.2**
- fix: another bfloat16 fix
**1.6.1**
- fix: make sure image tensors come to the CPU as float32 so there aren't compatibility issues with non-bfloat16 cpus
**1.6.0**
- fix: *maybe* address #13 with `expected scalar type BFloat16 but found Float`
- at minimum one can specify `--precision full` now and that will probably fix the issue
- feature: tile mode can now be specified per-prompt
**1.5.3**
- fix: missing config file for describe feature
**1.5.1**
- img2img now supported with PLMS (instead of just DDIM)
- added image captioning feature `aimg describe dog.jpg` => `a brown dog sitting on grass`
- added new commandline tool `aimg` for additional image manipulation functionality
**1.4.0**
- support multiple additive targets for masking with `|` symbol. Example: "fruit|stem|fruit stem"
**1.3.0**
- added prompt based image editing. Example: "fruit => gold coins"
- test coverage improved
**1.2.0**
- allow urls as init-images
**previous**
- img2img actually does # of steps you specify
- performance optimizations
- numerous other changes
## Not Supported
- a GUI. this is a python library
- exploratory features that don't work well

(The remainder of this diff consists of binary image changes under `docs/assets/`, including new files such as `pearl-gray.jpg`, `rocket-wide.png`, `svd-athens.gif`, and `svd-dog.gif`; image previews and further files were omitted from the diff view.)