runtime error

different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-06-21 10:40:34.691740: I tensorflow/tsl/cuda/] Could not find cuda drivers on your machine, GPU will not be used.
2023-06-21 10:40:34.729410: I tensorflow/core/platform/] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-06-21 10:40:35.373177: W tensorflow/compiler/tf2tensorrt/utils/] TF-TRT Warning: Could not find TensorRT
Downloading: 100%|██████████| 546/546 [00:00<00:00, 1.16MB/s]
Downloading: 100%|██████████| 905k/905k [00:00<00:00, 113MB/s]
Downloading: 100%|██████████| 282/282 [00:00<00:00, 591kB/s]
Traceback (most recent call last):
  File "", line 18, in <module>
    tokenizer = AutoTokenizer.from_pretrained(
  File "/home/user/.local/lib/python3.8/site-packages/transformers/", line 341, in from_pretrained
    return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/home/user/.local/lib/python3.8/site-packages/transformers/", line 1623, in from_pretrained
    resolved_vocab_files[file_id] = cached_path(
  File "/home/user/.local/lib/python3.8/site-packages/transformers/", line 948, in cached_path
    output_path = get_from_cache(
  File "/home/user/.local/lib/python3.8/site-packages/transformers/", line 1124, in get_from_cache
    raise ValueError(
ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.
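The traceback ends in a transient failure: transformers raises a plain ValueError when it can neither reach the Hub nor find the files in the local cache. One way to make the failing call at line 18 more robust is a small retry wrapper. This is a minimal sketch; load_with_retry and the "model-name" placeholder are assumptions for illustration (the actual checkpoint name is not visible in the truncated traceback).

```python
import time

def load_with_retry(load_fn, retries=3, delay=2.0):
    """Call load_fn, retrying on the ValueError that transformers
    raises for connection failures; re-raise after the last attempt."""
    last_err = None
    for _ in range(retries):
        try:
            return load_fn()
        except ValueError as err:
            last_err = err
            time.sleep(delay)
    raise last_err

# Hypothetical usage wrapping the call from the traceback
# (requires network access or a populated local cache):
# from transformers import AutoTokenizer
# tokenizer = load_with_retry(
#     lambda: AutoTokenizer.from_pretrained("model-name"))  # placeholder name
```

If the files were already downloaded once, passing local_files_only=True to from_pretrained skips the network entirely; here the message says the cache lookup also failed, so restoring connectivity (or pre-seeding the cache in the container image) is the likely fix.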
