runtime error

Exit code: 1. Reason: …ll be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(

Downloading shards: 100%|██████████| 3/3 [00:40<00:00, 13.49s/it]
Loading checkpoint shards:   0%|          | 0/3 [00:00<?, ?it/s]

Traceback (most recent call last):
  File "/home/user/app/app.py", line 7, in <module>
    model = AutoModelForCausalLM.from_pretrained("ping98k/typhoon-7b-rag-instruct-th", device_map={"": 0})
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 561, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3502, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3926, in _load_pretrained_model
    new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 805, in _load_state_dict_into_meta_model
    set_module_tensor_to_device(model, param_name, param_device, **set_module_kwargs)
  File "/usr/local/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 384, in set_module_tensor_to_device
    new_value = value.to(device)
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 247, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
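The load fails because `app.py` pins the entire model to GPU 0 via `device_map={"": 0}`, while the container has no NVIDIA driver, so `torch._C._cuda_init()` raises before any checkpoint shard can be placed. Below is a minimal sketch of a driver-aware load, assuming the goal is simply to let the same `app.py` start on both CPU-only and GPU hardware; the model ID is taken from the traceback, and the dtype choices are illustrative, not prescribed by the original app.

```python
# Sketch: fall back to CPU when no CUDA device/driver is available.
import torch
from transformers import AutoModelForCausalLM

model_id = "ping98k/typhoon-7b-rag-instruct-th"  # from the traceback

if torch.cuda.is_available():
    # GPU present: pin the whole model to device 0, as the original code intended.
    device_map = {"": 0}
    dtype = torch.float16
else:
    # CPU-only container (the case that produced the RuntimeError above).
    device_map = {"": "cpu"}
    dtype = torch.float32

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map=device_map,
    torch_dtype=dtype,
)
```

Alternatively, keeping `device_map={"": 0}` as written requires running the Space on hardware that actually exposes a CUDA-capable GPU and driver.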
