Runtime error

Exit code: 1. Reason:

```
rialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  state_dict = torch.load(model_path, map_location="cpu")
vocab : /usr/local/lib/python3.10/site-packages/f5_tts/infer/examples/vocab.txt
token : custom
model : /home/user/.cache/huggingface/hub/models--SWivid--F5-TTS/snapshots/d6bd6c3c3ec65c0a3ef25a6d3d09658c5e2817fd/F5TTS_Base/model_1200000.safetensors
vocab : /usr/local/lib/python3.10/site-packages/f5_tts/infer/examples/vocab.txt
token : custom
model : /home/user/.cache/huggingface/hub/models--SWivid--E2-TTS/snapshots/851141880b5ca38050025e98dfdee27dc553f86e/E2TTS_Base/model_1200000.safetensors

Downloading shards:   0%|          | 0/2 [00:00<?, ?it/s]
Downloading shards:  50%|█████     | 1/2 [00:11<00:11, 11.87s/it]
Downloading shards: 100%|██████████| 2/2 [00:18<00:00,  8.63s/it]
Downloading shards: 100%|██████████| 2/2 [00:18<00:00,  9.12s/it]
Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]
Loading checkpoint shards: 100%|██████████| 2/2 [00:00<00:00, 45839.39it/s]

Traceback (most recent call last):
  File "/home/user/app/app.py", line 560, in <module>
    chat_model_state = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4303, in from_pretrained
    dispatch_model(model, **device_map_kwargs)
  File "/usr/local/lib/python3.10/site-packages/accelerate/big_modeling.py", line 496, in dispatch_model
    raise ValueError(
ValueError: You are trying to offload the whole model to the disk. Please use the `disk_offload` function instead.
```
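The truncated message at the top of the log is PyTorch's `torch.load` safety warning, which recommends passing `weights_only=True` when loading checkpoints you don't fully control. A minimal sketch of that change, assuming a plain `.pt`/`.ckpt` checkpoint (the path below is a placeholder, not one taken from the log):

```python
import torch

# Placeholder path for illustration; substitute the checkpoint the app
# actually loads with torch.load.
model_path = "/path/to/checkpoint.pt"

# weights_only=True restricts unpickling to tensors and basic containers,
# which is what the warning in the log recommends for untrusted files.
state_dict = torch.load(model_path, map_location="cpu", weights_only=True)
```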
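The traceback points at the actual failure: calling `from_pretrained(..., device_map="auto")` on hardware with no usable GPU makes accelerate plan to offload every layer to disk, and `dispatch_model` refuses to do that. A hedged workaround, assuming the model fits in the Space's CPU RAM, is to request `device_map="auto"` only when an accelerator is present (`model_name` below is a stand-in; the log does not show which checkpoint `app.py` was loading):

```python
import torch
from transformers import AutoModelForCausalLM

# Stand-in identifier; replace with the chat model the Space actually uses.
model_name = "your-org/your-chat-model"

if torch.cuda.is_available():
    # With a GPU present, accelerate can split the model across devices,
    # so device_map="auto" behaves as intended.
    chat_model_state = AutoModelForCausalLM.from_pretrained(
        model_name, torch_dtype="auto", device_map="auto"
    )
else:
    # On CPU-only hardware, skip device_map so dispatch_model is never
    # called; the whole model is loaded into ordinary CPU memory instead
    # of being offloaded to disk.
    chat_model_state = AutoModelForCausalLM.from_pretrained(
        model_name, torch_dtype=torch.float32
    )
```

If the model genuinely cannot fit in RAM, the alternative the error message names is accelerate's `disk_offload(model, offload_dir=...)`, applied to the loaded model instead of relying on `device_map="auto"`.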
