Runtime error
Exit code: 1. Reason:

model-00007-of-00007.safetensors: 100%|██████████| 4.25G/4.25G [00:08<00:00, 484MB/s]
Downloading shards: 100%|██████████| 7/7 [01:06<00:00, 9.45s/it]

Traceback (most recent call last):
  File "/home/user/app/app_dialogue.py", line 31, in <module>
    "idefics2-8b-chatty": Idefics2ForConditionalGeneration.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3544, in from_pretrained
    config = cls._autoset_attn_implementation(
  File "/usr/local/lib/python3.10/site-packages/transformers/models/idefics2/modeling_idefics2.py", line 1385, in _autoset_attn_implementation
    config = super()._autoset_attn_implementation(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1454, in _autoset_attn_implementation
    cls._check_and_enable_flash_attn_2(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1555, in _check_and_enable_flash_attn_2
    raise ImportError(f"{preface} Flash Attention 2 is not available. {install_message}")
ImportError: FlashAttention2 has been toggled on, but it cannot be used due to the following error: Flash Attention 2 is not available. Please refer to the documentation of https://huggingface.co/docs/transformers/perf_infer_gpu_one#flashattention-2 to install Flash Attention 2.
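The log shows that all seven model shards downloaded successfully; the crash happens afterwards, when from_pretrained tries to enable Flash Attention 2 and the flash-attn package is not importable in the container. A minimal sketch of one workaround, assuming the app does not strictly need Flash Attention 2, is to load the checkpoint with PyTorch's built-in SDPA attention backend instead. The model id "HuggingFaceM4/idefics2-8b-chatty" and the dtype below are assumptions inferred from the app name in the traceback, not taken from the failing code.

    import torch
    from transformers import Idefics2ForConditionalGeneration

    # Assumption: the Space loads the public idefics2-8b-chatty checkpoint.
    model = Idefics2ForConditionalGeneration.from_pretrained(
        "HuggingFaceM4/idefics2-8b-chatty",
        torch_dtype=torch.bfloat16,  # assumed dtype; adjust to your hardware
        attn_implementation="sdpa",  # PyTorch scaled-dot-product attention; no flash-attn package needed
        device_map="auto",
    )

Alternatively, keep attn_implementation="flash_attention_2" and install the package on a CUDA machine as the linked documentation describes, e.g. pip install flash-attn --no-build-isolation.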