I'm trying to get this running on my M1, using the Automatic1111 extension.

#7 opened by swankwc

Has anyone gotten this running on an M1?

I get the following error when trying to run it.

Calculating sha256 for /Users/wesley/Documents/stable-diffusion-webui/models/Stable-diffusion/instruct-pix2pix-00-22000.ckpt: ffd280ddcfc8234e4d28b93641cb83169cebcb4d70998df9ee2eabb4d705374a
Loading weights [ffd280ddcf] from /Users/wesley/Documents/stable-diffusion-webui/models/Stable-diffusion/instruct-pix2pix-00-22000.ckpt
Creating model from config: /Users/wesley/Documents/stable-diffusion-webui/configs/instruct-pix2pix.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.53 M params.
Keeping EMAs of 688.
Applying cross attention optimization (InvokeAI).
Textual inversion embeddings loaded(0):
Model loaded in 58.0s (create model: 2.9s, apply weights to model: 16.1s, apply half(): 20.4s, move model to device: 18.4s, hijack: 0.1s).
Processing 1 image(s)
Traceback (most recent call last):
  File "/Users/wesley/Documents/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/routes.py", line 337, in run_predict
    output = await app.get_blocks().process_api(
  File "/Users/wesley/Documents/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1015, in process_api
    result = await self.call_function(
  File "/Users/wesley/Documents/stable-diffusion-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 833, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/Users/wesley/Documents/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/Users/wesley/Documents/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "/Users/wesley/Documents/stable-diffusion-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "/Users/wesley/Documents/stable-diffusion-webui/extensions/stable-diffusion-webui-instruct-pix2pix/scripts/instruct-pix2pix.py", line 128, in generate
    model.eval().cuda()
  File "/Users/wesley/Documents/stable-diffusion-webui/venv/lib/python3.10/site-packages/pytorch_lightning/core/mixins/device_dtype_mixin.py", line 128, in cuda
    device = torch.device("cuda", torch.cuda.current_device())
  File "/Users/wesley/Documents/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 482, in current_device
    _lazy_init()
  File "/Users/wesley/Documents/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 211, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
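For what it's worth, the traceback shows the extension calling model.eval().cuda() unconditionally (scripts/instruct-pix2pix.py, line 128), which can never work on an M1 because the macOS PyTorch build has no CUDA support. Below is only a minimal sketch of a device-aware alternative, assuming a PyTorch build with MPS support (torch >= 1.12 on macOS 12.3+); the toy Linear model just stands in for the real checkpoint, and the suggested change to the extension is an assumption, not its actual code.

import torch

def pick_device() -> torch.device:
    # Prefer CUDA where it exists, fall back to Apple's MPS backend, then CPU.
    if torch.cuda.is_available():
        return torch.device("cuda")
    if getattr(torch.backends, "mps", None) is not None and torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

if __name__ == "__main__":
    device = pick_device()
    # Stand-in for the loaded instruct-pix2pix model; the equivalent change in
    # the extension would be model.eval().to(device) instead of model.eval().cuda().
    model = torch.nn.Linear(4, 4)
    model.eval().to(device)
    print(f"model moved to {device}")

On an M1 this picks "mps" (or "cpu" if MPS is unavailable) instead of raising the CUDA assertion.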
