ValueError: Cannot load E:\huggingface\stable-cascade\decoder because embedding.1.weight expected shape tensor(..., device='meta', size=(320, 64, 1, 1)), but got torch.Size([320, 16, 1, 1])

#24
by Taonb666 - opened

import torch
from diffusers import StableCascadeDecoderPipeline

decoder = StableCascadeDecoderPipeline.from_pretrained("E:\huggingface\stable-cascade", torch_dtype=torch.float16).to("cuda")

Traceback (most recent call last):
  File "D:\PycharmProjects\AzkabanServer\models\stable_cascade.py", line 8, in <module>
    decoder = StableCascadeDecoderPipeline.from_pretrained("E:\huggingface\stable-cascade", torch_dtype=torch.float16).to(device)
  File "D:\Python310\lib\site-packages\huggingface_hub\utils\_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "D:\Python310\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1263, in from_pretrained
    loaded_sub_model = load_sub_model(
  File "D:\Python310\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 531, in load_sub_model
    loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
  File "D:\Python310\lib\site-packages\huggingface_hub\utils\_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "D:\Python310\lib\site-packages\diffusers\models\modeling_utils.py", line 669, in from_pretrained
    unexpected_keys = load_model_dict_into_meta(
  File "D:\Python310\lib\site-packages\diffusers\models\modeling_utils.py", line 154, in load_model_dict_into_meta
    raise ValueError(
ValueError: Cannot load E:\huggingface\stable-cascade\decoder because embedding.1.weight expected shape tensor(..., device='meta', size=(320, 64, 1, 1)), but got torch.Size([320, 16, 1, 1]). If you want to instead overwrite randomly initialized weights, please make sure to pass both low_cpu_mem_usage=False and ignore_mismatched_sizes=True. For more information, see also: https://github.com/huggingface/diffusers/issues/1619#issuecomment-1345604389 as an example.

I solved the problem as follows: open "stable-cascade\decoder\config.json" and rename the "c_in" key to "in_channels".
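For anyone who prefers to script the edit, here is a minimal sketch using only the standard library; the path is just an example and should point at your own model directory:

import json
from pathlib import Path

# Example path: adjust to wherever your local stable-cascade copy lives.
config_path = Path(r"E:\huggingface\stable-cascade\decoder\config.json")

config = json.loads(config_path.read_text())

# Rename the legacy "c_in" key to the name diffusers expects.
if "c_in" in config:
    config["in_channels"] = config.pop("c_in")
    config_path.write_text(json.dumps(config, indent=2))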

Same error here.

What needs to be changed in the main code after editing the JSON file?
decoder = StableCascadeDecoderPipeline.from_pretrained("stabilityai/stable-cascade", torch_dtype=torch.float16).to(device)

How does this line of code reference the JSON file? How do I make it pick up the changes?
I'm running the code through diffusers.

Where is the decoder folder? It's not present!

The config.json is located under your Hugging Face cache at .cache/huggingface/hub/models--stabilityai--stable-cascade/snapshots/e3aee2fd11a00865f5c085d3e741f2e51aef12d3/decoder
(the snapshot hash "e3aee2fd11a00865f5c085d3e741f2e51aef12d3" might be different for you).

In that config.json, change "c_in" to "in_channels", as mentioned by @Taonb666.
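If you don't want to track down the snapshot hash by hand, a sketch along these lines should resolve the cached decoder config for you; it assumes the model has already been downloaded, in which case snapshot_download just returns the existing snapshot directory:

from pathlib import Path
from huggingface_hub import snapshot_download

# Resolve the local snapshot directory for the repo (fetches only the decoder
# config if it is not cached yet, otherwise reuses the cached copy).
snapshot_dir = snapshot_download(
    "stabilityai/stable-cascade",
    allow_patterns=["decoder/config.json"],
)

config_path = Path(snapshot_dir) / "decoder" / "config.json"
print(config_path)  # this is the file to edit

You can then apply the same "c_in" to "in_channels" rename to that file, either by hand or with the patch sketch posted above.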
