Crashes every time I try to run my prompt
Error completing request
Arguments: (0, '1girl, water drop, water, AnimeScreenCap, solo, ribbon, frills, skirt, shirt, bangs, glass, dress, bow, gradient, shards, buttons, vest, long hair, black hair, looking at viewer, short sleeves, red ribbon, parted lips, neck ribbon, puffy sleeves, white shirt, grey eyes, white skirt, hair intakes, outstretched arms, puffy short sleeves, center frills, white dress, grey background, upper body, simple background, gradient background, black eyes, collared shirt, black skirt, blue eyes, closed mouth, cowboy shot, purple eyes, frilled shirt, pleated skirt, black background, black vest, frilled skirt, purple hair,', 'painting by bad-artist, painting by bad-artist-anime, ', 'None', 'None', <PIL.Image.Image image mode=RGB size=1024x1536 at 0x1BF84ACA770>, {'image': <PIL.Image.Image image mode=RGBA size=1024x1536 at 0x1BF84ACB970>, 'mask': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=1024x1536 at 0x1BF84AC9B10>}, None, None, None, 0, 50, 15, 0, 0, 1, False, False, 1, 1, 7, 0.6, -1.0, -1.0, 0, 0, 0, False, 1536, 1024, 3, False, 32, 0, '', '', 0, '
- \n
CFG Scale
should be 2 or lower. \n
Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8
', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, False, False, '', 'Will upscale the image by the selected scale factor; use width and height sliders to set tile size
', 64, 0, 2, 1, '', 0, '', True, False, False) {}
Traceback (most recent call last):
File "D:\A111\stable-diffusion-webui\modules\call_queue.py", line 45, in f
res = list(func(*args, **kwargs))
File "D:\A111\stable-diffusion-webui\modules\call_queue.py", line 28, in f
res = func(*args, **kwargs)
File "D:\A111\stable-diffusion-webui\modules\img2img.py", line 152, in img2img
processed = process_images(p)
File "D:\A111\stable-diffusion-webui\modules\processing.py", line 464, in process_images
res = process_images_inner(p)
File "D:\A111\stable-diffusion-webui\modules\processing.py", line 557, in process_images_inner
c = prompt_parser.get_multicond_learned_conditioning(shared.sd_model, prompts, p.steps)
File "D:\A111\stable-diffusion-webui\modules\prompt_parser.py", line 203, in get_multicond_learned_conditioning
learned_conditioning = get_learned_conditioning(model, prompt_flat_list, steps)
File "D:\A111\stable-diffusion-webui\modules\prompt_parser.py", line 138, in get_learned_conditioning
conds = model.get_learned_conditioning(texts)
File "D:\A111\stable-diffusion-webui\scripts\v2.py", line 36, in get_learned_conditioning_with_prior
cond = ldm.models.diffusion.ddpm.LatentDiffusion.get_learned_conditioning_original(self, c)
File "D:\A111\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
c = self.cond_stage_model(c)
File "D:\A111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "D:\A111\stable-diffusion-webui\modules\sd_hijack_clip.py", line 219, in forward
z1 = self.process_tokens(tokens, multipliers)
File "D:\A111\stable-diffusion-webui\modules\sd_hijack_clip.py", line 240, in process_tokens
z = self.encode_with_transformers(tokens)
File "D:\A111\stable-diffusion-webui\modules\sd_hijack_clip.py", line 286, in encode_with_transformers
outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
File "D:\A111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1148, in _call_impl
result = forward_call(*input, **kwargs)
File "D:\A111\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 811, in forward
return self.text_model(
File "D:\A111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "D:\A111\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 708, in forward
hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
File "D:\A111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "D:\A111\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 223, in forward
inputs_embeds = self.token_embedding(input_ids)
File "D:\A111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "D:\A111\stable-diffusion-webui\modules\sd_hijack.py", line 156, in forward
tensor = torch.cat([tensor[0:offset + 1], emb[0:emb_len], tensor[offset + 1 + emb_len:]])
RuntimeError: Sizes of tensors must match except in dimension 0. Expected size 768 but got size 1024 for tensor number 1 in the list.
Make sure you have 768-v2.1-ema.ckpt, or another SD 2.0 model, active when using the embedding. It won't work with 1.5 or 1.4.
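For reference, the size mismatch can be reproduced outside the webui. A minimal sketch with illustrative tensors (not the webui's actual ones): SD 1.x CLIP produces 768-wide token embeddings while SD 2.x OpenCLIP produces 1024-wide vectors, so the `torch.cat` in sd_hijack.py cannot splice an SD 2.x embedding into an SD 1.x prompt tensor.

```python
import torch

# SD 1.x CLIP token embeddings are 768-wide; SD 2.x OpenCLIP vectors are 1024-wide.
prompt_tensor = torch.zeros(77, 768)   # stand-in for an SD 1.x prompt's token embeddings
ti_vectors    = torch.zeros(2, 1024)   # stand-in for vectors from an SD 2.x textual inversion

# Same concatenation pattern as sd_hijack.py: splice the embedding into the prompt.
# Raises: RuntimeError: Sizes of tensors must match except in dimension 0.
# Expected size 768 but got size 1024 for tensor number 1 in the list.
torch.cat([prompt_tensor[0:1], ti_vectors, prompt_tensor[3:]])
```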
RuntimeError: Sizes of tensors must match except in dimension 0. Expected size 768 but got size 1024 for tensor number 1 in the list.
This message indicates that the wrong model is being used for the embedding.
I'm using this with AnythingV3 and my own model. Does it just not work with anime models?
Embeddings only work with the model family they were trained on.
"Textual Inversion Embedding by ConflictX For SD 2.x trained on 768x768 images from anime sources."
I see. Thanks for the answers.