runtime error

Exit code: 1. Reason:

ion.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  state_dict = torch.load(model_path, map_location="cpu")
vocab : /usr/local/lib/python3.10/site-packages/f5_tts/infer/examples/vocab.txt
tokenizer : custom
model : /home/user/.cache/huggingface/hub/models--SWivid--F5-TTS/snapshots/4dcc16f297f2ff98a17b3726b16f5de5a5e45672/F5TTS_Base/model_1200000.safetensors
vocab : /usr/local/lib/python3.10/site-packages/f5_tts/infer/examples/vocab.txt
tokenizer : custom
model : /home/user/.cache/huggingface/hub/models--SWivid--E2-TTS/snapshots/98016df3e24487aad803aff506335caba8414195/E2TTS_Base/model_1200000.safetensors
Downloading shards: 100%|██████████| 2/2 [00:20<00:00, 10.38s/it]
Loading checkpoint shards: 100%|██████████| 2/2 [00:00<00:00, 42153.81it/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 512, in <module>
    chat_model_state = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4303, in from_pretrained
    dispatch_model(model, **device_map_kwargs)
  File "/usr/local/lib/python3.10/site-packages/accelerate/big_modeling.py", line 496, in dispatch_model
    raise ValueError(
ValueError: You are trying to offload the whole model to the disk. Please use the `disk_offload` function instead.
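The fatal `ValueError` comes from `accelerate`: it is raised when the automatically inferred device map places every module of the chat model on disk, which typically means the hardware has neither enough GPU VRAM nor enough CPU RAM for the checkpoint, and `dispatch_model` refuses a map that is disk-only. Below is a minimal sketch of one workaround, assuming the goal is to keep `from_pretrained` working on small hardware; the model name, the `max_memory` budget, and the `offload` directory are illustrative placeholders, not values from the original app:

    from transformers import AutoModelForCausalLM

    model_name = "some-org/some-chat-model"  # placeholder; app.py sets the real value

    chat_model_state = AutoModelForCausalLM.from_pretrained(
        model_name,
        torch_dtype="auto",
        device_map="auto",
        max_memory={"cpu": "12GiB"},   # hypothetical budget; must actually fit in RAM
        offload_folder="offload",      # scratch directory for weights spilled to disk
        offload_state_dict=True,       # lowers peak RAM while the shards are loaded
    )

With at least one non-disk device in the map, `dispatch_model` installs offload hooks instead of raising. Inference through disk offload is slow, so upgrading the Space's hardware or switching to a smaller chat model is the more practical fix.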

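Separately, the truncated text at the start of the `Reason:` field is the tail of torch's `FutureWarning` about `torch.load` defaulting to `weights_only=False` (the cut-off fragment `ion.add_safe_globals` belongs to `torch.serialization.add_safe_globals`). It is not fatal, and the `torch.load` call it points at lives inside the installed `f5_tts` package rather than in the app, but if one did apply the recommendation there, the change would look like this sketch, with `model_path` standing in for the variable the package already uses:

    import torch

    model_path = "/path/to/checkpoint.pt"  # placeholder for the package's checkpoint path

    # weights_only=True restricts unpickling to tensors and other safe types,
    # which is all a plain state_dict checkpoint needs (torch >= 1.13).
    state_dict = torch.load(model_path, map_location="cpu", weights_only=True)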