Size mismatch when trying to load
by huggingface9837
I've tried following the instructions in the model card to load this model, as well as a few variations of my own, but I can't get it to load:
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-2-1-base', torch_dtype=torch.float16).to('cuda')
#pipeline = AutoPipelineForText2Image.from_pretrained('artificialguybr/freedom', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights(
    'artificialguybr/3d-redmond-2-1v-3d-render-style-for-freedom-redmond-sd-2-1',
    weight_name='3DRedmond21V-FreedomRedmond-3DRenderStyle-3DRenderAF.safetensors',
)
I also tried:
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "artificialguybr/freedom",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
)
pipe.load_lora_weights(
    'artificialguybr/3d-redmond-2-1v-3d-render-style-for-freedom-redmond-sd-2-1',
    weight_name='3DRedmond21V-FreedomRedmond-3DRenderStyle-3DRenderAF.safetensors',
)
_ = pipe.to("cuda")
Both give the same error:
RuntimeError: Error(s) in loading state_dict for UNet2DConditionModel:
size mismatch for down_blocks.0.attentions.0.proj_in.lora_A.default_0.weight: copying a param with shape torch.Size([128, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 320]).
size mismatch for down_blocks.0.attentions.0.proj_in.lora_B.default_0.weight: copying a param with shape torch.Size([320, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 128]).
size mismatch for down_blocks.0.attentions.0.proj_out.lora_A.default_0.weight: copying a param with shape torch.Size([128, 320, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 320]).
size mismatch for down_blocks.0.attentions.0.proj_out.lora_B.default_0.weight: copying a param with shape torch.Size([320, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([320, 128]).
# ...
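The mismatched tensors are 4D in the checkpoint ([128, 320, 1, 1], i.e. 1x1 conv kernels) but 2D in the model ([128, 320], i.e. linear weights), so my guess is the LoRA was trained against a UNet whose proj_in/proj_out layers are Conv2d, while the pipeline I'm loading builds them as Linear. To rule out anything on the pipeline side, the raw shapes stored in the file can be checked directly; a minimal sketch (repo and file name are the same ones used in the snippets above):

from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

# Download the LoRA checkpoint and load its raw state dict
path = hf_hub_download(
    'artificialguybr/3d-redmond-2-1v-3d-render-style-for-freedom-redmond-sd-2-1',
    '3DRedmond21V-FreedomRedmond-3DRenderStyle-3DRenderAF.safetensors',
)
state = load_file(path)

# Print the first few tensor names and shapes to see whether they are 4D
for name, tensor in list(state.items())[:8]:
    print(name, tuple(tensor.shape))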
Any idea how to fix this?