runtime error
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
0it [00:00, ?it/s]
/usr/local/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Loading pipeline components...: 100%|██████████| 7/7 [00:00<00:00, 15.01it/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 13, in <module>
    pipeline.unet.load_attn_procs("gr33nr1ng3r/Mukh-Oboyob")
  File "/usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/diffusers/loaders/unet.py", line 297, in load_attn_procs
    raise ValueError(f"Module {key} is not a LoRACompatibleConv or LoRACompatibleLinear module.")
ValueError: Module down_blocks.0.attentions.0.transformer_blocks.0.attn1.to_k is not a LoRACompatibleConv or LoRACompatibleLinear module.
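The ValueError comes from calling `pipeline.unet.load_attn_procs()` on a diffusers version where LoRA loading has moved to the PEFT-backed pipeline-level API, so the UNet's attention layers are no longer the `LoRACompatibleConv`/`LoRACompatibleLinear` wrappers that method expects. A minimal sketch of a workaround is below, assuming the Space runs a Stable Diffusion pipeline and that `gr33nr1ng3r/Mukh-Oboyob` hosts standard LoRA weights; the base model id is an assumption, since the actual app.py is not shown. Loading through `pipeline.load_lora_weights()` may also require the `peft` package to be installed.

import torch
from diffusers import StableDiffusionPipeline

# Assumption: base checkpoint used by the Space; replace with the one app.py loads.
pipeline = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)

# Load the LoRA through the pipeline-level API instead of
# pipeline.unet.load_attn_procs(...), which raises the ValueError above.
pipeline.load_lora_weights("gr33nr1ng3r/Mukh-Oboyob")

If the weights were saved in an older attention-processor format, an alternative is to pin diffusers to the version the LoRA was trained with, where `load_attn_procs()` still accepted that format.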