OSError: does not appear to have model_index.json
I'm trying to load this model using diffusers, but I get the error below. It seems like a file is missing from the repo?
OSError: stabilityai/sd-turbo does not appear to have a file named model_index.json.
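For reference, a minimal sketch of the failing call (the real script is sdt.py; this is roughly what it does):

from diffusers import StableDiffusionPipeline

# This call raises the OSError above: from_pretrained expects the
# multi-folder diffusers layout with a model_index.json at the repo root.
pipe = StableDiffusionPipeline.from_pretrained("stabilityai/sd-turbo")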
You need to use from_single_file, not from_pretrained, to load all-in-one models these days.
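Something like this (a minimal sketch; the checkpoint path is a placeholder, point it at your local sd_turbo.safetensors):

from diffusers import StableDiffusionPipeline

# from_single_file loads a single all-in-one .safetensors checkpoint
# instead of the multi-folder diffusers repo layout.
pipe = StableDiffusionPipeline.from_single_file("/path/to/sd_turbo.safetensors")
pipe = pipe.to("mps")  # or "cuda" / "cpu"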
Having said that, it seems the model is broken anyway when you try to move it to a device:
Traceback (most recent call last):
File "/Volumes/SSD2TB/AI/Diffusers/sdt.py", line 20, in <module>
"/Volumes/SSD2TB/AI/diffusers/models/sd_turbo.safetensors" ).to('mps')
File "/Volumes/SSD2TB/AI/Diffusers/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 864, in to
module.to(device, dtype)
File "/Volumes/SSD2TB/AI/Diffusers/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2271, in to
return super().to(*args, **kwargs)
File "/Volumes/SSD2TB/AI/Diffusers/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1160, in to
return self._apply(convert)
File "/Volumes/SSD2TB/AI/Diffusers/lib/python3.10/site-packages/torch/nn/modules/module.py", line 810, in _apply
module._apply(fn)
File "/Volumes/SSD2TB/AI/Diffusers/lib/python3.10/site-packages/torch/nn/modules/module.py", line 810, in _apply
module._apply(fn)
File "/Volumes/SSD2TB/AI/Diffusers/lib/python3.10/site-packages/torch/nn/modules/module.py", line 810, in _apply
module._apply(fn)
File "/Volumes/SSD2TB/AI/Diffusers/lib/python3.10/site-packages/torch/nn/modules/module.py", line 833, in _apply
param_applied = fn(param)
File "/Volumes/SSD2TB/AI/Diffusers/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1158, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
NotImplementedError: Cannot copy out of meta tensor; no data!
The same final error occurs if you try to run it on the CPU:
Traceback (most recent call last):
File "/Volumes/SSD2TB/AI/Diffusers/sdt.py", line 27, in <module>
image = pipe(prompt=prompt, negative_prompt=negative_prompt,
File "/Volumes/SSD2TB/AI/Diffusers/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/Volumes/SSD2TB/AI/Diffusers/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py", line 855, in __call__
prompt_embeds, negative_prompt_embeds = self.encode_prompt(
File "/Volumes/SSD2TB/AI/Diffusers/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py", line 425, in encode_prompt
prompt_embeds = prompt_embeds.to(dtype=prompt_embeds_dtype, device=device)
NotImplementedError: Cannot copy out of meta tensor; no data!
Same error here: NotImplementedError: Cannot copy out of meta tensor; no data!
Working on the conversion now - should be up in 1h
Should work now
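With the converted repo in place, the plain from_pretrained route should work again; a quick, untested sketch:

from diffusers import StableDiffusionPipeline

# Once the repo has the diffusers layout (model_index.json plus the
# component folders), from_pretrained can resolve it directly from the Hub.
pipe = StableDiffusionPipeline.from_pretrained("stabilityai/sd-turbo")
pipe = pipe.to("mps")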
Still broken, it seems:
Creating model from config: D:\SDdev\repositories\generative-models\configs\inference\sd_xl_refiner.yaml
changing setting sd_model_checkpoint to turbo.safetensors [6b33199dfa]: RuntimeError
Nvm, an A1111 update was required.