Model conversion keeps failing with Guernika Model Converter 5.0.2
#24 opened by akaMarukyu
Hello,
I just updated to the latest version of Guernika and am trying to convert some models, but the conversion fails every time. I have tried a few different models, and they all fail with the same error; I have also tried both CPU & NE and CPU & GPU, but both fail to convert.
I converted models successfully a few months ago with the older version.
I've attached a screenshot of the options I used, along with the error log.
Thanks for any help.
Starting python converter
Initializing StableDiffusionPipeline from /Users/[MY_NAME]/Downloads/meinamix_meinaV11.safetensors..
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["id2label"]` will be overriden.
transformers/models/clip/feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
warnings.warn(
`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["id2label"]` will be overriden.
Done.
Output size will be 512x512
Converting vae_encoder
`vae_encoder` already exists at /var/folders/j0/c81y2q2j6vx2vr4rdr1_p_dh0000gn/T/meinamix_meinaV11_vae_encoder.mlpackage, skipping conversion.
Converted vae_encoder
Converting vae_decoder
`vae_decoder` already exists at /var/folders/j0/c81y2q2j6vx2vr4rdr1_p_dh0000gn/T/meinamix_meinaV11_vae_decoder.mlpackage, skipping conversion.
Converted vae_decoder
Converting unet
AttentionImplementations.SPLIT_EINSUM
(torch.Size([2, 1280, 8, 8]), torch.float32)}
JIT tracing..
guernikatools/layer_norm.py:61: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
Traceback (most recent call last):
File "guernikatools/torch2coreml.py", line 1682, in <module>
File "guernikatools/torch2coreml.py", line 1495, in main
File "guernikatools/torch2coreml.py", line 1072, in convert_unet
File "torch/jit/_trace.py", line 794, in trace
return trace_module(
File "torch/jit/_trace.py", line 1056, in trace_module
module._c._create_method_from_trace(
File "torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "torch/nn/modules/module.py", line 1488, in _slow_forward
result = self.forward(*input, **kwargs)
File "guernikatools/unet.py", line 1003, in forward
RuntimeError: The size of tensor a (64) must match the size of tensor b (32) at non-singleton dimension 3
[31812] Failed to execute script 'torch2coreml' due to unhandled exception: The size of tensor a (64) must match the size of tensor b (32) at non-singleton dimension 3
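For context, the RuntimeError above is PyTorch's standard broadcasting error: during JIT tracing, two tensors inside the UNet's forward pass end up with mismatched sizes (64 vs. 32) in the last dimension, and neither size is 1, so elementwise ops cannot broadcast. A minimal sketch that triggers the same class of error (the shapes here are hypothetical, chosen only to reproduce the identical message, not taken from the converter's internals):

```python
import torch

# Hypothetical intermediate activations whose last dimension disagrees
# (64 vs. 32); since neither is 1, broadcasting fails on elementwise add.
a = torch.randn(2, 1280, 8, 64)
b = torch.randn(2, 1280, 8, 32)

try:
    _ = a + b
except RuntimeError as e:
    print(e)
    # The size of tensor a (64) must match the size of tensor b (32)
    # at non-singleton dimension 3
```

In the trace above the mismatch surfaces inside `guernikatools/unet.py`, which suggests one tensor was built for a different spatial resolution than the other during tracing; that would be a converter-side issue rather than a problem with the model file itself.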
@akaMarukyu thanks for the detailed report! I think this should be fixed in the latest update, 5.0.3; let me know if you still have problems :)
Thanks, it's working great now.
GuiyeC changed discussion status to closed