## Prompts not working on 2-1 models #25

I've tried several times to train models of my own face with the updated SD 2.1 model that's now included in this. But when I use my prompts as instructed, the output looks nothing like me; the prompts seem to be ignored completely.

Are there some changes to the way prompts work in 2.1?

I'm testing them in AUTOMATIC1111's webui, by the way.

Thanks for the help.

It didn't work on my side with 2.1 training either (running locally on Windows). When loading the .ckpt file in auto1111's webui, I get size-mismatch errors such as:

```
size mismatch for cond_stage_model.transformer.text_model.encoder.layers.11.self_attn.v_proj.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for cond_stage_model.transformer.text_model.encoder.layers.11.self_attn.q_proj.weight: copying a param with shape torch.Size([1024, 1024]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for cond_stage_model.transformer.text_model.encoder.layers.11.self_attn.q_proj.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for cond_stage_model.transformer.text_model.encoder.layers.11.self_attn.out_proj.weight: copying a param with shape torch.Size([1024, 1024]) from checkpoint, the shape in current model is torch.Size([768, 768]).
size mismatch for cond_stage_model.transformer.text_model.encoder.layers.11.self_attn.out_proj.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for cond_stage_model.transformer.text_model.encoder.layers.11.layer_norm1.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for cond_stage_model.transformer.text_model.encoder.layers.11.layer_norm1.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([768]).
size mismatch for cond_stage_model.transformer.text_model.encoder.layers.11.mlp.fc1.weight: copying a param with shape torch.Size([4096, 1024]) from checkpoint, the shape in current model is torch.Size([3072, 768]).
```
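In case it helps anyone triage: the 768-vs-1024 mismatch reflects the text-encoder width (SD 1.x uses a 768-wide CLIP encoder, SD 2.x a 1024-wide one), so the webui is trying to load a 2.x checkpoint as a 1.x model. A small sketch to check which family a checkpoint belongs to, assuming a standard LDM-style state dict (key names may vary between checkpoints, so it searches for the token-embedding key):

```python
def guess_sd_version(state_dict) -> str:
    """Guess SD 1.x vs 2.x from the text-encoder token-embedding width.

    The embedding width distinguishes the two families:
    768 -> SD 1.x (CLIP ViT-L), 1024 -> SD 2.x (OpenCLIP ViT-H).
    """
    # Find the token-embedding weight regardless of the exact key prefix.
    key = next(k for k in state_dict if k.endswith("token_embedding.weight"))
    width = state_dict[key].shape[1]
    return "2.x" if width == 1024 else "1.x"

# Typical usage (loading the checkpoint first, e.g. with PyTorch):
#   sd = torch.load("model.ckpt", map_location="cpu")
#   sd = sd.get("state_dict", sd)
#   print(guess_sd_version(sd))
```

The function only reads `.shape`, so it works on any tensor-like values without importing a framework itself.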

Did you copy the .yaml file with the same name as the model .ckpt?
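(For reference, the webui looks for a config file sitting next to the checkpoint with the same base name. A quick sketch of pairing them up; the file names are placeholders:)

```python
import shutil
from pathlib import Path

def pair_config(ckpt_path: str, config_path: str) -> Path:
    """Copy an inference config next to a checkpoint with a matching base name.

    AUTOMATIC1111's webui picks up e.g. "model.yaml" alongside "model.ckpt",
    which is how it knows to build the 2.x architecture before loading weights.
    """
    ckpt = Path(ckpt_path)
    target = ckpt.with_suffix(".yaml")  # same base name, .yaml extension
    shutil.copyfile(config_path, target)
    return target

# Example (placeholder paths):
#   pair_config("models/Stable-diffusion/my-face-sd21.ckpt",
#               "configs/v2-inference-v.yaml")
```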

> Did you copy .yaml file with the same name as the model.ckpt?

OMG I forgot to put it. You are right!

Additionally, I commented out the bitsandbytes part in app.py for running locally on Windows, since it may not support Windows yet.
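(Rather than commenting the import out by hand, an import guard is a common way to keep that part optional; a sketch, assuming app.py only needs bitsandbytes for the 8-bit optimizer:)

```python
# Fall back gracefully when bitsandbytes isn't importable (e.g. on Windows),
# instead of editing the source on each platform.
try:
    import bitsandbytes as bnb  # may be unavailable on Windows
    HAS_BNB = True
except ImportError:
    bnb = None
    HAS_BNB = False

def use_8bit_adam() -> bool:
    # Only enable the 8-bit optimizer when bitsandbytes imported cleanly;
    # callers fall back to a standard optimizer otherwise.
    return HAS_BNB
```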

OK, after that error was solved, I got the same issue as OP.

It seems like the unique concept prompt keyword has no effect at all in the trained customized v2.1 model (tried twice, on the remote Hugging Face Space and locally).

I trained another model with SD 1.5 using the same settings and it works fine.

Just out of curiosity, how many pictures did you use for training and how many steps?

> Just out of curiosity, how many pictures did you use for training and how many steps?

In my case, I used 27 images with 2700 steps and the SD2.1-768 model.

Hi @Cossette, @DGSpitzer and others: the `convert_diffusers_to_original_stable_diffusion.py` script this Space used was not fully compatible with 2.x models. The script was updated today for full compatibility and should now be working.

So it may be the case that your models did learn the concepts, but that only worked with the diffusers library. You can test whether the token works in the Inference Widget on your model page here on Hugging Face. If it works there but ended up broken as a `*.ckpt` for AUTO1111, the conversion was indeed bad, so you can convert it again now with:

- The new `convert_diffusers_to_original_stable_diffusion.py` script
- Or this Colab I just made quickly for you: https://colab.research.google.com/drive/1jmUP5ZhM4M63l9UFoiAVgUyjfudO9lmK