Script error

#2
opened by wizbe

Here is test.py, the same code as in the introduction:

!pip install git+https://github.com/huggingface/accelerate

from diffusers import StableDiffusionPipeline
import torch
torch.backends.cudnn.benchmark = True
pipe = StableDiffusionPipeline.from_pretrained("IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-EN-v0.1", torch_dtype=torch.float16)
pipe.to('cuda')

prompt = '小桥流水人家,Van Gogh style'
image = pipe(prompt, guidance_scale=10.0).images[0]
image.save("小桥.png")

I'm running it from the Python environment that runs the Taiyi webui, but here is the result:

Fetching 22 files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████| 22/22 [00:00<00:00, 24289.20it/s]
Traceback (most recent call last):
  File "/home/liyong/t.py", line 9, in <module>
    image = pipe(prompt, guidance_scale=10.0).images[0]
  File "/home/liyong/taiyi-stable-diffusion-webui/venv/lib64/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/liyong/taiyi-stable-diffusion-webui/venv/lib64/python3.9/site-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py", line 182, in __call__
    text_embeddings = self.text_encoder(text_input.input_ids.to(self.device))[0]
  File "/home/liyong/taiyi-stable-diffusion-webui/venv/lib64/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/liyong/taiyi-stable-diffusion-webui/venv/lib64/python3.9/site-packages/transformers/models/clip/modeling_clip.py", line 722, in forward
    return self.text_model(
  File "/home/liyong/taiyi-stable-diffusion-webui/venv/lib64/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/liyong/taiyi-stable-diffusion-webui/venv/lib64/python3.9/site-packages/transformers/models/clip/modeling_clip.py", line 643, in forward
    encoder_outputs = self.encoder(
  File "/home/liyong/taiyi-stable-diffusion-webui/venv/lib64/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/liyong/taiyi-stable-diffusion-webui/venv/lib64/python3.9/site-packages/transformers/models/clip/modeling_clip.py", line 574, in forward
    layer_outputs = encoder_layer(
  File "/home/liyong/taiyi-stable-diffusion-webui/venv/lib64/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/liyong/taiyi-stable-diffusion-webui/venv/lib64/python3.9/site-packages/transformers/models/clip/modeling_clip.py", line 317, in forward
    hidden_states, attn_weights = self.self_attn(
  File "/home/liyong/taiyi-stable-diffusion-webui/venv/lib64/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/liyong/taiyi-stable-diffusion-webui/venv/lib64/python3.9/site-packages/transformers/models/clip/modeling_clip.py", line 257, in forward
    attn_output = torch.bmm(attn_probs, value_states)
RuntimeError: expected scalar type Half but found Float

What's wrong?

Fengshenbang-LM org

Maybe you can try upgrading your environment.
Here are the suggested versions:
transformers: 4.24.0
diffusers: 0.7.2
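
For reference, a minimal sketch of that upgrade inside the webui virtual environment seen in the traceback (the venv path is taken from your paths above and is an assumption; adjust it to your setup):

# assuming the venv path shown in the traceback; adjust to your environment
source /home/liyong/taiyi-stable-diffusion-webui/venv/bin/activate
pip install --upgrade transformers==4.24.0 diffusers==0.7.2

You can then confirm the installed versions before rerunning test.py:

python -c "import transformers, diffusers; print(transformers.__version__, diffusers.__version__)"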
