Errors converting local .ckpt
M1 MacBook Air, 8GB, macOS 13.1, Xcode 14.2
Compute units: CPU and NE
Two different errors from two different models:
- 1 (dreamLikeSamKuvshino_ckpt.ckpt)
An error occurred during conversion
Traceback (most recent call last):
File "python_coreml_stable_diffusion/torch2coreml_ui.py", line 254, in convert_model
File "python_coreml_stable_diffusion/torch2coreml.py", line 897, in main
File "python_coreml_stable_diffusion/convert_original_stable_diffusion_to_diffusers.py", line 650, in load_from_ckpt
KeyError: 'state_dict'
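For reference, this KeyError usually means the checkpoint stores its weights at the top level of the pickled dict rather than under a `state_dict` key (some pruned or merged checkpoints are saved that way). A loader can hedge against both layouts; a minimal sketch, assuming the checkpoint has already been loaded with `torch.load`:

```python
def extract_state_dict(checkpoint: dict) -> dict:
    """Return the weights dict whether or not it is nested under 'state_dict'."""
    # Some .ckpt files wrap the weights as {"state_dict": {...}, "epoch": ..., ...};
    # others (often pruned or merged ones) are the bare weights dict itself.
    return checkpoint.get("state_dict", checkpoint)
```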
- 2 (v1-5-pruned-emaonly.ckpt)
An error occurred during conversion
Traceback (most recent call last):
File "python_coreml_stable_diffusion/torch2coreml_ui.py", line 254, in convert_model
File "python_coreml_stable_diffusion/torch2coreml.py", line 897, in main
File "python_coreml_stable_diffusion/convert_original_stable_diffusion_to_diffusers.py", line 649, in load_from_ckpt
File "torch/serialization.py", line 789, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "torch/serialization.py", line 1131, in _load
result = unpickler.load()
File "torch/serialization.py", line 1124, in find_class
return super().find_class(mod_name, name)
ModuleNotFoundError: No module named 'pytorch_lightning'
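This second one happens because `torch.load` unpickles the checkpoint, and the pickle stream references `pytorch_lightning` classes that aren't installed in the converter's environment. One common workaround (an illustrative sketch, not necessarily what the converter does internally) is to register a stub module before loading so the unpickler can at least resolve the module name:

```python
import sys
import types

def ensure_stub_module(name: str) -> None:
    """Install an empty placeholder module so unpickling can resolve `name`."""
    if name not in sys.modules:
        sys.modules[name] = types.ModuleType(name)

ensure_stub_module("pytorch_lightning")
# torch.load("model.ckpt", map_location="cpu") would now get past the import;
# it can still fail if the pickle needs actual classes from the stubbed module.
```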
Hey! Thanks for the detailed feedback. I just uploaded a new version of the converter that hopefully fixes both of those problems. Could you check it?
Ok, I will redownload and try again.
Another issue though...
On 8GB Macs, conversion sometimes fails.
With the python scripts you can convert each component one at a time, not all together,
e.g. python -m ....... --convert-vae-decoder --model-version .... --convert-unet ..... --model-version ....
Is this something you can do from your end?
See this link:
https://github.com/godly-devotion/mochi-diffusion/discussions/43
Yes, when using the Guernika Model Converter you can select a single module at a time, and it will be the same as with the python script.
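For anyone who wants the script route anyway, here is a hedged sketch of driving the conversion one module per invocation; the flags follow apple/ml-stable-diffusion's torch2coreml script, and the model id and output path are placeholders:

```python
# One module flag per run keeps peak memory down on 8GB machines.
MODULE_FLAGS = ["--convert-text-encoder", "--convert-vae-decoder", "--convert-unet"]

def conversion_commands(model_version: str, output_dir: str) -> list:
    """Build one converter command line per Stable Diffusion module."""
    return [
        ["python", "-m", "python_coreml_stable_diffusion.torch2coreml",
         flag, "--model-version", model_version, "-o", output_dir]
        for flag in MODULE_FLAGS
    ]

for cmd in conversion_commands("runwayml/stable-diffusion-v1-5", "./output"):
    print(" ".join(cmd))  # to execute, use subprocess.run(cmd, check=True) instead
```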
I'm getting the following error when trying to convert a 1.5 model checkpoint with Guernika Model Converter:
An error occurred during conversion
Traceback (most recent call last):
File "shutil.py", line 791, in move
FileNotFoundError: [Errno 2] No such file or directory: '/Users/michaelangelo/Desktop/Astria SKS man (v1.5)/Stable_Diffusion_version_Astria SKS man (v1.5)_text_encoder.mlmodelc' -> '/Users/michaelangelo/Desktop/Astria SKS man (v1.5)/TextEncoder.mlmodelc'

During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "python_coreml_stable_diffusion/torch2coreml_ui.py", line 254, in convert_model
File "python_coreml_stable_diffusion/torch2coreml.py", line 932, in main
File "python_coreml_stable_diffusion/torch2coreml.py", line 205, in bundle_resources_for_guernika
File "python_coreml_stable_diffusion/torch2coreml.py", line 180, in _compile_coreml_model
File "shutil.py", line 811, in move
File "shutil.py", line 435, in copy2
File "shutil.py", line 264, in copyfile
FileNotFoundError: [Errno 2] No such file or directory: '/Users/michaelangelo/Desktop/Astria SKS man (v1.5)/Stable_Diffusion_version_Astria SKS man (v1.5)_text_encoder.mlmodelc'
where Desktop was designated as the Save To: directory for Astria SKS man.ckpt …
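One hedged guess at what's going on here: the compile step hands the model path to Apple's coremlcompiler, and a destination containing spaces and parentheses ("Astria SKS man (v1.5)") can break if that command is built as a single shell string, leaving no `.mlmodelc` for the later `shutil.move` to find. Building the command as an argument list sidesteps shell splitting entirely; a sketch, where `compile_command` is a hypothetical wrapper:

```python
def compile_command(mlpackage_path: str, out_dir: str) -> list:
    """Build an argv list for Apple's coremlcompiler CLI.

    An argv list is passed to the OS verbatim, so spaces and
    parentheses in the paths survive intact (no shell splitting).
    """
    return ["xcrun", "coremlcompiler", "compile", mlpackage_path, out_dir]

path = "/Users/me/Desktop/Astria SKS man (v1.5)/TextEncoder.mlpackage"
cmd = compile_command(path, "/Users/me/Desktop/Astria SKS man (v1.5)")
# subprocess.run(cmd, check=True) would invoke the compiler on macOS.
# By contrast, naive whitespace splitting mangles the path into pieces:
assert len(f"xcrun coremlcompiler compile {path} out".split()) > 5
```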
@Michaelangelo I'm not sure why you got that error. Were you able to convert it, or are you getting the same error every time? Were you able to convert any other models?
Same error every time, and no other models have worked. Here's the latest error, with another model:
An error occurred during conversion
Traceback (most recent call last):
File "python_coreml_stable_diffusion/torch2coreml_ui.py", line 254, in convert_model
File "python_coreml_stable_diffusion/torch2coreml.py", line 897, in main
File "python_coreml_stable_diffusion/convert_original_stable_diffusion_to_diffusers.py", line 650, in load_from_ckpt
KeyError: 'state_dict'
when trying to convert a local 1.5 checkpoint and save to Desktop ...
and here's the error from trying to convert 1.4 official checkpoint:
An error occurred during conversion
Traceback (most recent call last):
File "python_coreml_stable_diffusion/torch2coreml_ui.py", line 254, in convert_model
File "python_coreml_stable_diffusion/torch2coreml.py", line 897, in main
File "python_coreml_stable_diffusion/convert_original_stable_diffusion_to_diffusers.py", line 649, in load_from_ckpt
File "torch/serialization.py", line 789, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "torch/serialization.py", line 1131, in _load
result = unpickler.load()
File "torch/serialization.py", line 1124, in find_class
return super().find_class(mod_name, name)
ModuleNotFoundError: No module named 'pytorch_lightning'
✓ Pytorch 1.13.1 is already installed and up-to-date.
In the latest instance, Guernika Model Converter triggered an out-of-memory error and was taking up 160GB of data before restarting …
@Michaelangelo
160GB of disk space? I've seen it take tens of GBs, but that is way too much. Were you trying to convert one of the official models?
The model converter is actually just a wrapper around the python scripts. If you want to test with better output, you can right click the app, choose "Show Package Contents", then go to Contents -> MacOS and double click the "Guernika Model Converter" there. It should launch with a terminal window showing all the output of the python scripts, which could be helpful.
@GuiyeC Thanks for the tips. I've tried converting several official models as well as custom models, such as Analog Diffusion.
Attached are the errors resulting from attempted conversion of official release versions for base Model 1.4 and the official release for base Model 1.5.
With Protogen-v5.3, the conversion script leaves an empty output folder where the models should have been saved, and exits with the following error report after ten to fifteen minutes of active processing:
An error occurred during conversion
Traceback (most recent call last):
File "shutil.py", line 791, in move
FileNotFoundError: [Errno 2] No such file or directory: '/Users/michaelangelo/Downloads/Converted ORIGINAL/Protogen-v5.3-Photorealism/Stable_Diffusion_version_Protogen-v5.3-Photorealism_text_encoder.mlmodelc' -> '/Users/michaelangelo/Downloads/Converted ORIGINAL/Protogen-v5.3-Photorealism/TextEncoder.mlmodelc'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "python_coreml_stable_diffusion/torch2coreml_ui.py", line 254, in convert_model
File "python_coreml_stable_diffusion/torch2coreml.py", line 932, in main
File "python_coreml_stable_diffusion/torch2coreml.py", line 205, in bundle_resources_for_guernika
File "python_coreml_stable_diffusion/torch2coreml.py", line 180, in _compile_coreml_model
File "shutil.py", line 811, in move
File "shutil.py", line 435, in copy2
File "shutil.py", line 264, in copyfile
FileNotFoundError: [Errno 2] No such file or directory: '/Users/michaelangelo/Downloads/Converted ORIGINAL/Protogen-v5.3-Photorealism/Stable_Diffusion_version_Protogen-v5.3-Photorealism_text_encoder.mlmodelc'
When trying to convert the official release for the base 2.0 model (512px), I receive the following error straight away:
Traceback (most recent call last):
File "python_coreml_stable_diffusion/torch2coreml_ui.py", line 254, in convert_model
File "python_coreml_stable_diffusion/torch2coreml.py", line 897, in main
File "python_coreml_stable_diffusion/convert_original_stable_diffusion_to_diffusers.py", line 692, in load_from_ckpt
File "python_coreml_stable_diffusion/convert_original_stable_diffusion_to_diffusers.py", line 245, in create_unet_diffusers_config
File "omegaconf/dictconfig.py", line 355, in __getattr__
self._format_and_raise(
File "omegaconf/base.py", line 231, in _format_and_raise
format_and_raise(
File "omegaconf/_utils.py", line 899, in format_and_raise
_raise(ex, cause)
File "omegaconf/_utils.py", line 797, in _raise
raise ex.with_traceback(sys.exc_info()[2]) # set env var OC_CAUSE=1 for full trace
File "omegaconf/dictconfig.py", line 351, in __getattr__
return self._get_impl(
File "omegaconf/dictconfig.py", line 442, in _get_impl
node = self._get_child(
File "omegaconf/basecontainer.py", line 73, in _get_child
child = self._get_node(
File "omegaconf/dictconfig.py", line 480, in _get_node
raise ConfigKeyError(f"Missing key {key!s}")
omegaconf.errors.ConfigAttributeError: Missing key num_heads
full_key: model.params.unet_config.params.num_heads
object_type=dict
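A hedged note on this one: Stable Diffusion 1.x inference configs carry `num_heads` under `unet_config.params`, while the 2.x configs replace it with `num_head_channels`, so a converter that reads `num_heads` unconditionally fails on 2.x checkpoints. A sketch of a tolerant lookup, with plain dicts standing in for the OmegaConf object:

```python
def resolve_attention_config(unet_params: dict) -> dict:
    """Accept either the SD 1.x or the SD 2.x way of describing attention heads."""
    if "num_heads" in unet_params:          # SD 1.x: fixed head count per block
        return {"num_heads": unet_params["num_heads"]}
    if "num_head_channels" in unet_params:  # SD 2.x: fixed per-head channel width
        return {"num_head_channels": unet_params["num_head_channels"]}
    raise KeyError("unet_config.params has neither num_heads nor num_head_channels")

resolve_attention_config({"num_heads": 8})           # SD 1.x style config
resolve_attention_config({"num_head_channels": 64})  # SD 2.x style config
```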
Right-clicking the app to Show Package Contents, then going to Contents -> MacOS and launching Guernika Model Converter, opens a terminal window that outputs a continual stream of progress info; the last page of scrollback shows the following error(s):
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/gradio/routes.py", line 284, in run_predict
output = await app.blocks.process_api(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/gradio/blocks.py", line 982, in process_api
result = await self.call_function(fn_index, inputs, iterator)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/gradio/blocks.py", line 824, in call_function
prediction = await anyio.to_thread.run_sync(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
result = context.run(func, *args)
TypeError: DetectionDetailerScript.ui.<locals>.<lambda>() takes 1 positional argument but 2 were given
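For reference, that TypeError is the generic Python failure mode of a callback registered with fewer parameters than the framework passes when the event fires (and the `DetectionDetailerScript` name suggests it may come from a separate gradio UI script rather than the conversion itself). The mechanics in isolation, with a variadic handler as the usual fix:

```python
# A handler written for one argument breaks when the caller supplies two:
narrow = lambda value: value
try:
    narrow("image", "extra")  # TypeError: takes 1 positional argument but 2 were given
    raised = False
except TypeError:
    raised = True

# Accepting variable arguments keeps the callback compatible either way:
tolerant = lambda *args: args[0]
result = tolerant("image", "extra")
```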
I'd certainly appreciate any insight from your perspective and experience. Many thanks!
@Michaelangelo Hey! Could you share the exact links to the models so I can try converting them myself?