runtime error

different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-06-21 10:40:34.691740: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-06-21 10:40:34.729410: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-06-21 10:40:35.373177: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Downloading: 100%|██████████| 546/546 [00:00<00:00, 1.16MB/s]
Downloading: 100%|██████████| 905k/905k [00:00<00:00, 113MB/s]
Downloading: 100%|██████████| 282/282 [00:00<00:00, 591kB/s]
Traceback (most recent call last):
  File "app.py", line 18, in <module>
    tokenizer = AutoTokenizer.from_pretrained(
  File "/home/user/.local/lib/python3.8/site-packages/transformers/tokenization_auto.py", line 341, in from_pretrained
    return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/home/user/.local/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1623, in from_pretrained
    resolved_vocab_files[file_id] = cached_path(
  File "/home/user/.local/lib/python3.8/site-packages/transformers/file_utils.py", line 948, in cached_path
    output_path = get_from_cache(
  File "/home/user/.local/lib/python3.8/site-packages/transformers/file_utils.py", line 1124, in get_from_cache
    raise ValueError(
ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.
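
The traceback points at the AutoTokenizer.from_pretrained call on line 18 of app.py: transformers could not reach the Hugging Face Hub to download the tokenizer files, and no copy was found in the local cache, so get_from_cache raises the ValueError. A minimal sketch of that call pattern is below; the model identifier is an assumption, since app.py itself is not shown.

from transformers import AutoTokenizer

# Hypothetical model id; the real one is set in app.py and does not appear in the logs.
MODEL_ID = "bert-base-uncased"

# Online load: fetches the tokenizer files from the Hugging Face Hub and caches them locally.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# If the files have already been cached, local_files_only=True skips all network
# access and avoids the "Connection error" ValueError shown above.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, local_files_only=True)

Note that local_files_only only helps once the files already exist in the cache; if the container has no network access at all, the tokenizer files have to be shipped with the app (for example, committed alongside app.py and loaded from that local path).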
