Doesn't work in Forge :-(
StateDict Keys: {'transformer': 776, 'vae': 244, 'text_encoder': 196, 'text_encoder_2': 221, 'ignore': 0}
Using Detected T5 Data Type: gguf
Using pre-quant state dict!
GGUF state dict: {'Q6_K': 169, 'Q8_0': 1}
Traceback (most recent call last):
  File "D:\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 30, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "D:\stable-diffusion-webui-forge\modules\txt2img.py", line 131, in txt2img_function
    processed = processing.process_images(p)
  File "D:\stable-diffusion-webui-forge\modules\processing.py", line 836, in process_images
    manage_model_and_prompt_cache(p)
  File "D:\stable-diffusion-webui-forge\modules\processing.py", line 804, in manage_model_and_prompt_cache
    p.sd_model, just_reloaded = forge_model_reload()
  File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "D:\stable-diffusion-webui-forge\modules\sd_models.py", line 504, in forge_model_reload
    sd_model = forge_loader(state_dict, additional_state_dicts=additional_state_dicts)
  File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "D:\stable-diffusion-webui-forge\backend\loader.py", line 520, in forge_loader
    component = load_huggingface_component(estimated_config, component_name, lib_name, cls_name, local_path, component_sd)
  File "D:\stable-diffusion-webui-forge\backend\loader.py", line 109, in load_huggingface_component
    load_state_dict(model, state_dict, log_name=cls_name, ignore_errors=['transformer.encoder.embed_tokens.weight', 'logit_scale'])
  File "D:\stable-diffusion-webui-forge\backend\state_dict.py", line 5, in load_state_dict
    missing, unexpected = model.load_state_dict(sd, strict=False)
  File "D:\stable-diffusion-webui-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 2581, in load_state_dict
    raise RuntimeError(
RuntimeError: Error(s) in loading state_dict for IntegratedT5:
    While copying the parameter named "transformer.encoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight", whose dimensions in the model are torch.Size([32, 64]) and whose dimensions in the checkpoint are torch.Size([32, 68]), an exception occurred : ('The size of tensor a (64) must match the size of tensor b (68) at non-singleton dimension 1',).
    While copying the parameter named "transformer.shared.weight", whose dimensions in the model are torch.Size([32128, 4096]) and whose dimensions in the checkpoint are torch.Size([32128, 3360]), an exception occurred : ('The size of tensor a (4096) must match the size of tensor b (3360) at non-singleton dimension 1',).
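For what it's worth, the mismatched shapes in the error line up exactly with GGUF's packed block layout, which suggests Forge is trying to copy the raw quantized byte tensors into an FP16 module instead of dequantizing them first. A quick sanity check, assuming the standard llama.cpp block sizes (Q6_K: 256 weights per 210-byte block; Q8_0: 32 weights per 34-byte block):

```python
# GGUF stores quantized tensors as rows of fixed-size blocks, so the
# "checkpoint" width reported by PyTorch is a byte count, not an element count.
Q6_K = (256, 210)  # (elements per block, bytes per block)
Q8_0 = (32, 34)

def packed_width(n_elems, quant):
    """Byte width of one row of n_elems weights after quantization."""
    block, nbytes = quant
    assert n_elems % block == 0, "row length must be a multiple of the block size"
    return n_elems // block * nbytes

# transformer.shared.weight: expected 4096 elements, checkpoint reports 3360
print(packed_width(4096, Q6_K))  # 3360

# relative_attention_bias.weight: expected 64 elements, checkpoint reports 68
# (this is the single Q8_0 tensor in the state dict)
print(packed_width(64, Q8_0))    # 68
```

Both reported checkpoint widths match the packed sizes, so the tensors themselves look fine; the loader just doesn't dequantize them for this code path.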
I’ve confirmed the issue. Some modifications are needed to make the file compatible with Stable Diffusion WebUI Forge, and while it may take some time, I’ll attempt to address it. At this stage, ComfyUI supports all formats, whereas Stable Diffusion WebUI Forge only supports the FP16 format.
I’ve uploaded a modified version. Please give it a try.
flan_t5_xxl_TE-only_Q6_K.gguf