fix: package will not try to install xformers on `aarch64` machines.

While this will allow the Dockerfile to build on macOS M1, [torch will not be able to use the M1 when generating images.](https://github.com/pytorch/pytorch/issues/81224#issuecomment-1499741152)
Bryce 11 months ago committed by Bryce Drennan
parent 14739bc90b
commit 2a26495653

@@ -18,4 +18,6 @@ coverage.xml
 .mypy_cache
 .pytest_cache
 .hypothesis
-.DS_Store
+.DS_Store
+other
+outputs

@@ -486,13 +486,15 @@ A: The AI models are cached in `~/.cache/` (or `HUGGINGFACE_HUB_CACHE`). To dele
 **13.0.0**
 - 🎉 feature: multi-controlnet support. pass in multiple `--control-mode`, `--control-image`, and `--control-image-raw` arguments.
 - 🎉 feature: "better" memory management. If GPU is full, least-recently-used model is moved to RAM.
-- alpha feature: `aimg run-api-server` command. Runs a http webserver (not finished). After running, visit http://127.0.0.1:8000/docs for api.
-- feature: add colorization controlnet. improve `aimg colorize` command
+- 🎉 feature: add colorization controlnet. improve `aimg colorize` command
+- 🧪 alpha feature: `aimg run-api-server` command. Runs a http webserver (not finished). After running, visit http://127.0.0.1:8000/docs for api.
 - feature: [disabled] inpainting controlnet can be used instead of finetuned inpainting model
   - The inpainting controlnet doesn't work as well as the finetuned model
 - feature: python interface allows configuration of controlnet strength
-- fix: hide the "triton" error messages
 - feature: show full stack trace on error in cli
+- fix: hide the "triton" error messages
+- fix: package will not try to install xformers on `aarch64` machines. While this will allow the dockerfile to build on
+  MacOS M1, [torch will not be able to use the M1 when generating images.](https://github.com/pytorch/pytorch/issues/81224#issuecomment-1499741152)
 **12.0.3**
 - fix: exclude broken versions of timm as dependencies

@@ -5,7 +5,7 @@ from typing import Optional
 from fastapi import FastAPI, Query
 from fastapi.concurrency import run_in_threadpool
 from fastapi.responses import StreamingResponse
-from pydantic import BaseModel
+from pydantic import BaseModel  # noqa
 from imaginairy import ImaginePrompt, imagine
 from imaginairy.log_utils import configure_logging

@@ -21,7 +21,14 @@ else:
 @lru_cache()
 def get_git_revision_hash() -> str:
-    return subprocess.check_output(["git", "rev-parse", "HEAD"]).decode("ascii").strip()
+    try:
+        return (
+            subprocess.check_output(["git", "rev-parse", "HEAD"])
+            .decode("ascii")
+            .strip()
+        )
+    except FileNotFoundError:
+        return "no-git"

 revision_hash = get_git_revision_hash()
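The change above stops the build from crashing when the `git` binary is absent (as in a slim Docker image). A minimal standalone sketch of the same pattern, extended with one assumption not in the commit: it also catches `CalledProcessError` so it degrades gracefully when run outside a git checkout.

```python
# Sketch of the fallback pattern from the diff above.
# Assumption: CalledProcessError is also caught (the commit itself only
# catches FileNotFoundError), so a non-repo directory returns "no-git" too.
import subprocess
from functools import lru_cache


@lru_cache()
def get_git_revision_hash() -> str:
    try:
        return (
            subprocess.check_output(["git", "rev-parse", "HEAD"])
            .decode("ascii")
            .strip()
        )
    except (FileNotFoundError, subprocess.CalledProcessError):
        # git binary missing, or not inside a git repository
        return "no-git"


print(get_git_revision_hash())
```

`lru_cache` ensures the subprocess runs at most once per process, so repeated calls during setup are cheap.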
@@ -94,6 +101,6 @@ setup(
         "torchvision>=0.13.1",
         "kornia>=0.6",
         "uvicorn",
-        "xformers>=0.0.16; sys_platform!='darwin'",
+        "xformers>=0.0.16; sys_platform!='darwin' and platform_machine!='aarch64'",
     ],
 )
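The string after the `;` is a standard PEP 508 environment marker that pip evaluates at install time. As a rough sketch of how it behaves, it can be checked with the `packaging` library (the marker is copied from the diff; the environment dicts are illustrative values, not taken from any real machine):

```python
# Evaluate the PEP 508 marker added in this commit against explicit
# environments, so the result is deterministic regardless of the host.
from packaging.markers import Marker

marker = Marker("sys_platform != 'darwin' and platform_machine != 'aarch64'")

# Linux on x86_64: xformers is installed.
print(marker.evaluate({"sys_platform": "linux", "platform_machine": "x86_64"}))
# macOS on Apple Silicon: skipped (sys_platform is 'darwin').
print(marker.evaluate({"sys_platform": "darwin", "platform_machine": "arm64"}))
# Linux on aarch64: skipped by the new platform_machine clause.
print(marker.evaluate({"sys_platform": "linux", "platform_machine": "aarch64"}))
```

Note that the `platform_machine!='aarch64'` clause is what this commit adds: the existing `sys_platform!='darwin'` check already excluded Macs, but not aarch64 Linux containers built on them.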
