feature: add "edit" control mode

Edit images using text instructions with any SD 1.5 based model. Similar to InstructPix2Pix.
pull/316/head
commit 9526a85a71 (parent 476a81a967)
authored by Bryce, committed by Bryce Drennan

@@ -104,6 +104,19 @@ The middle image is the "shuffled" input image
<img src="assets/pearl_shuffle_clown_019331_1_kdpmpp2m15_PS7.5_img2img-0.0_a_clown.jpg" height="256">
</p>
**Edit Instructions Control**
Similar to InstructPix2Pix (below) but works with any SD 1.5 based model.
```bash
imagine --control-image pearl-girl.jpg --control-mode edit --init-image-strength 0.01 --steps 30 --negative-prompt "" --model openjourney-v2 "make it anime" "make it at the beach"
```
<p float="left">
<img src="assets/girl_with_a_pearl_earring.jpg" height="256">
<img src="assets/pearl_anime_019537_521829407_kdpmpp2m30_PS9.0_img2img-0.01_make_it_anime.jpg" height="256">
<img src="assets/pearl_beach_019561_862735879_kdpmpp2m30_PS7.0_img2img-0.01_make_it_at_the_beach.jpg" height="256">
</p>
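The same edit can be scripted through the Python API. A minimal sketch, assuming `ImaginePrompt` in this release accepts `control_image` and `control_mode` keyword arguments mirroring the CLI flags above (adjust argument names to your installed version):

```python
# Minimal sketch of "edit" control mode via the Python API.
# Assumes ImaginePrompt accepts control_image/control_mode keyword arguments
# in this release; the CLI flags shown above are the documented interface.
from imaginairy import ImaginePrompt, LazyLoadingImage, imagine_image_files

prompt = ImaginePrompt(
    "make it anime",
    control_image=LazyLoadingImage(filepath="pearl-girl.jpg"),
    control_mode="edit",
    init_image_strength=0.01,
    steps=30,
    negative_prompt="",
    model="openjourney-v2",
)
imagine_image_files([prompt], outdir="./outputs")
```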
### Instruction based image edits [by InstructPix2Pix](https://github.com/timothybrooks/instruct-pix2pix)
Just tell imaginairy how to edit the image and it will do it for you!
@@ -411,8 +424,8 @@ docker run -it --gpus all -v $HOME/.cache/huggingface:/root/.cache/huggingface -
## ChangeLog
- 🎉 feature: add "shuffle" control mode. Image is generated from elements of control image. similar to style transfer
- 🎉 feature: add "edit" control mode. Edit images using text instructions with any SD 1.5 based model. Similar to instructPix2Pix.
- 🎉 feature: add "shuffle" control mode. Image is generated from elements of control image. Similar to style transfer.
- 🎉 feature: upgrade to [controlnet 1.1](https://github.com/lllyasviel/ControlNet-v1-1-nightly)
- 🎉 fix: controlnet now works with all SD 1.5 based models
- fix: raw control images are now properly loaded. fixes #296

Two new binary image assets added (37 KiB and 46 KiB); files not shown.

@@ -40,7 +40,9 @@ from imaginairy.cli.shared import (
"--control-mode",
default=None,
show_default=False,
type=click.Choice(["", "canny", "depth", "normal", "hed", "openpose", "shuffle"]),
type=click.Choice(
["", "canny", "depth", "normal", "hed", "openpose", "shuffle", "edit"]
),
help="how the control image is used as signal",
)
@click.pass_context
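For reference, here is a standalone sketch (illustrative code, not part of the commit) of how this option behaves: `click.Choice` rejects any value not in the list, so adding "edit" here is what makes `--control-mode edit` a valid argument.

```python
# Standalone illustration (not imaginairy code) of the click.Choice validation
# used by the --control-mode option above.
import click


@click.command()
@click.option(
    "--control-mode",
    default=None,
    show_default=False,
    type=click.Choice(
        ["", "canny", "depth", "normal", "hed", "openpose", "shuffle", "edit"]
    ),
    help="how the control image is used as signal",
)
def demo(control_mode):
    click.echo(f"control mode: {control_mode}")


if __name__ == "__main__":
    demo()
```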

@@ -205,6 +205,14 @@ CONTROLNET_CONFIGS = [
weights_url="https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/69fc48b9cbd98661f6d0288dc59b59a5ccb32a6b/control_v11e_sd15_shuffle.pth",
alias="shuffle",
),
# "instruct pix2pix"
ControlNetConfig(
short_name="edit15",
control_type="edit",
config_path="configs/control-net-v15.yaml",
weights_url="https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/69fc48b9cbd98661f6d0288dc59b59a5ccb32a6b/control_v11e_sd15_ip2p.pth",
alias="edit",
),
]
CONTROLNET_CONFIG_SHORTCUTS = {m.short_name: m for m in CONTROLNET_CONFIGS}
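The shortcut map on the last line is how the rest of the code resolves these entries by short name. A minimal lookup sketch, assuming the definitions live in `imaginairy.config` (adjust the import to wherever CONTROLNET_CONFIG_SHORTCUTS is defined in your checkout):

```python
# Minimal sketch: resolve the new "edit" controlnet entry by its short name.
# Assumes the definitions live in imaginairy.config; adjust if they are elsewhere.
from imaginairy.config import CONTROLNET_CONFIG_SHORTCUTS

edit_cfg = CONTROLNET_CONFIG_SHORTCUTS["edit15"]
print(edit_cfg.control_type)  # "edit"
print(edit_cfg.weights_url)   # ControlNet v1.1 instruct-pix2pix weights URL
```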

@@ -185,7 +185,7 @@ def shuffle_map_torch(tensor, h=None, w=None, f=256):
def noop(img):
    return img
    return (img + 1.0) / 2.0
CONTROL_MODES = {
@@ -197,4 +197,5 @@ CONTROL_MODES = {
"openpose": create_pose_map,
# "scribble": None,
"shuffle": shuffle_map_torch,
"edit": noop,
}
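Note the change to the `noop` preprocessor: unlike canny or depth, the "edit" mode does not derive a new map from the control image; it only rescales it, presumably from the [-1, 1] tensor range used internally to the [0, 1] range the ControlNet hint expects. A standalone sketch of that rescale:

```python
# Standalone sketch of the rescale the "edit" preprocessor applies:
# a tensor in [-1, 1] is mapped linearly onto [0, 1].
import torch

img = torch.tensor([-1.0, -0.5, 0.0, 0.5, 1.0])
print((img + 1.0) / 2.0)  # tensor([0.0000, 0.2500, 0.5000, 0.7500, 1.0000])
```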
