I ran into a problem while running this. I don't know much about these things, so please help me get it working.

#12
by nio2019 - opened

File "D:\AI\stable-diffusion-webui-master\venv\lib\site-packages\torch\nn\modules\module.py", line 1671, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for ControlNet:
size mismatch for input_blocks.1.1.proj_in.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
size mismatch for input_blocks.1.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
size mismatch for input_blocks.1.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
size mismatch for input_blocks.1.1.proj_out.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
size mismatch for input_blocks.2.1.proj_in.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
size mismatch for input_blocks.2.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
size mismatch for input_blocks.2.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
size mismatch for input_blocks.2.1.proj_out.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
size mismatch for input_blocks.4.1.proj_in.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
size mismatch for input_blocks.4.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
size mismatch for input_blocks.4.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
size mismatch for input_blocks.4.1.proj_out.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
size mismatch for input_blocks.5.1.proj_in.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
size mismatch for input_blocks.5.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
size mismatch for input_blocks.5.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
size mismatch for input_blocks.5.1.proj_out.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
size mismatch for input_blocks.7.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
size mismatch for input_blocks.7.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
size mismatch for input_blocks.7.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
size mismatch for input_blocks.7.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
size mismatch for input_blocks.8.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
size mismatch for input_blocks.8.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
size mismatch for input_blocks.8.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
size mismatch for input_blocks.8.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
size mismatch for middle_block.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
size mismatch for middle_block.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
size mismatch for middle_block.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
size mismatch for middle_block.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).

It's one of these cases (see the sketch after this list for a quick way to check which one applies):

  • You use a 1.5 model with a 2.1 controlnet (=> change the model/controlnet)
  • You use a 2.1 model with a 1.5 controlnet (=> change the model/controlnet)
  • You use a 2.1 controlnet with a 1.5 yaml file (=> change the yaml file in settings/controlnet - see my README)
  • You use a 1.5 controlnet with a 2.1 yaml file (=> change the yaml file in settings/controlnet - see my README)
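
If you're not sure which case you're in, here is a minimal sketch (assuming PyTorch is available and the checkpoint is a plain .ckpt that torch.load can read; the path below is just an example) that reports whether a checkpoint belongs to the 1.x family (768-dim cross-attention) or the 2.x family (1024-dim):

    import torch

    def guess_sd_family(ckpt_path):
        # Load on CPU; some checkpoints nest their weights under a 'state_dict' key.
        sd = torch.load(ckpt_path, map_location="cpu")
        if isinstance(sd, dict):
            sd = sd.get("state_dict", sd)
        for name, tensor in sd.items():
            # The second dim of the cross-attention key projection is the text-embedding
            # width: 768 for SD 1.x (CLIP ViT-L), 1024 for SD 2.x (OpenCLIP ViT-H).
            if name.endswith("attn2.to_k.weight"):
                return "2.x (1024-dim)" if tensor.shape[1] == 1024 else "1.x (768-dim)"
        return "unknown"

    # Example call (hypothetical path):
    print(guess_sd_family(r"D:\AI\stable-diffusion-webui-master\extensions\sd-webui-controlnet\models\canny-sd21.ckpt"))

Run it on both the base model checkpoint and the ControlNet checkpoint: the two families have to match, and the yaml selected in settings/controlnet has to match the ControlNet.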
thibaud changed discussion status to closed

Inside the folder I have two files, cldm_v15.yaml and cldm_v21.yaml. I deleted the v15 file and kept the v21 one, but the same problem continued. I then put the v15 file back and switched to v21 as explained, and the problem is still the same. Please help me, and sorry if this is a bother, but I really want to try it.

https://prnt.sc/3-xKSK7di-Kr
Here is a screenshot of the files I have; maybe it helps in solving my problem. Thank you very much.

I deleted the cldm_v15.yaml file and kept cldm_v21.yaml, and then I got this problem:

Loading model: canny-sd21 [64de50ad]
Loaded state_dict from [D:\AI\stable-diffusion-webui-master\extensions\sd-webui-controlnet\models\canny-sd21.ckpt]
Error running process: D:\AI\stable-diffusion-webui-master\extensions\sd-webui-controlnet\scripts\controlnet.py
Traceback (most recent call last):
File "D:\AI\stable-diffusion-webui-master\modules\scripts.py", line 386, in process
script.process(p, *script_args)
File "D:\AI\stable-diffusion-webui-master\extensions\sd-webui-controlnet\scripts\controlnet.py", line 735, in process
model_net = self.load_control_model(p, unet, model, lowvram)
File "D:\AI\stable-diffusion-webui-master\extensions\sd-webui-controlnet\scripts\controlnet.py", line 534, in load_control_model
model_net = self.build_control_model(p, unet, model, lowvram)
File "D:\AI\stable-diffusion-webui-master\extensions\sd-webui-controlnet\scripts\controlnet.py", line 572, in build_control_model
network = network_module(
File "D:\AI\stable-diffusion-webui-master\extensions\sd-webui-controlnet\scripts\cldm.py", line 63, in init
config = OmegaConf.load(config_path)
File "D:\AI\stable-diffusion-webui-master\venv\lib\site-packages\omegaconf\omegaconf.py", line 187, in load
with io.open(os.path.abspath(file_), "r", encoding="utf-8") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'D:\AI\stable-diffusion-webui-master\extensions\sd-webui-controlnet\models\cldm_v15.yaml'
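
(For anyone hitting the same thing: this FileNotFoundError means the extension is still configured to load cldm_v15.yaml, so deleting that file breaks loading instead of switching anything. Both yaml files should stay in the models folder, and the 2.1 ControlNet should be pointed at cldm_v21.yaml in settings/controlnet, as described in the README. A quick sanity check that the expected files are in place, using the path from the traceback above:)

    from pathlib import Path

    # Path taken from the traceback above.
    models_dir = Path(r"D:\AI\stable-diffusion-webui-master\extensions\sd-webui-controlnet\models")

    # Both yaml configs should stay in place; the setting decides which one is actually used.
    for name in ("cldm_v15.yaml", "cldm_v21.yaml", "canny-sd21.ckpt"):
        print(name, "present" if (models_dir / name).exists() else "MISSING")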

After changing the settings, re-downloading the files I had deleted, and changing the ControlNet settings again, I still ran into the same problem:
return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (154x768 and 1024x320)
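
(That last error points at the same version mismatch: the base model is still producing 768-dim conditioning, i.e. an SD 1.5 checkpoint, while canny-sd21 expects 1024-dim context. A minimal sketch that reproduces the mismatch, with the shapes taken from the error above and random tensors standing in for the real weights:)

    import torch
    import torch.nn.functional as F

    context_15 = torch.randn(1, 154, 768)    # conditioning from an SD 1.5 text encoder (768-dim)
    to_k_weight_21 = torch.randn(320, 1024)  # to_k weight from an SD 2.1 ControlNet (expects 1024-dim)

    try:
        F.linear(context_15, to_k_weight_21)
    except RuntimeError as e:
        # Raises a RuntimeError like: mat1 and mat2 shapes cannot be multiplied (154x768 and 1024x320)
        print(e)

Using an SD 2.1 base model with canny-sd21 (or a 1.5 ControlNet with a 1.5 base model) removes the mismatch, per the list above.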
