runtime error

Space not ready. Reason: Error, exitCode: 1, message: None

Container logs:

2022-11-02 09:53:13.495217: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-11-02 09:53:13.612187: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2022-11-02 09:53:13.612222: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2022-11-02 09:53:13.649282: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2022-11-02 09:53:14.382877: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2022-11-02 09:53:14.382950: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2022-11-02 09:53:14.382963: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
Moving 0 files to the new cache system

0it [00:00, ?it/s]

Downloading:   0%|          | 0.00/1.60k [00:00<?, ?B/s]
Downloading: 100%|██████████| 1.60k/1.60k [00:00<00:00, 2.38MB/s]

Downloading:   0%|          | 0.00/378M [00:00<?, ?B/s]
Downloading: 100%|██████████| 378M/378M [00:04<00:00, 87.5MB/s]
2022-11-02 09:53:19.660920: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2022-11-02 09:53:19.660949: W tensorflow/stream_executor/cuda/cuda_driver.cc:263] failed call to cuInit: UNKNOWN ERROR (303)
2022-11-02 09:53:19.660974: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (s-minhnd-speech-recognize-45398-7b5489cfd4-jhpw5): /proc/driver/nvidia/version does not exist
2022-11-02 09:53:19.661213: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.

TFWav2Vec2ForCTC has backpropagation operations that are NOT supported on CPU. If you wish to train/fine-tine this model, you need a GPU or a TPU
2022-11-02 09:53:20.169533: I tensorflow/compiler/xla/service/service.cc:173] XLA service 0x563ae4c640c0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2022-11-02 09:53:20.169572: I tensorflow/compiler/xla/service/service.cc:181]   StreamExecutor device (0): Host, Default Version
2022-11-02 09:53:20.178552: I tensorflow/compiler/jit/xla_compilation_cache.cc:476] Compiled cluster using XLA!  This line is logged at most once for the lifetime of the process.
All model checkpoint layers were used when initializing TFWav2Vec2ForCTC.

All the layers of TFWav2Vec2ForCTC were initialized from the model checkpoint at facebook/wav2vec2-base-960h.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFWav2Vec2ForCTC for predictions without further training.

Downloading:   0%|          | 0.00/163 [00:00<?, ?B/s]
Downloading: 100%|██████████| 163/163 [00:00<00:00, 325kB/s]

Downloading:   0%|          | 0.00/291 [00:00<?, ?B/s]
Downloading: 100%|██████████| 291/291 [00:00<00:00, 530kB/s]

Downloading:   0%|          | 0.00/85.0 [00:00<?, ?B/s]
Downloading: 100%|██████████| 85.0/85.0 [00:00<00:00, 140kB/s]

Downloading:   0%|          | 0.00/159 [00:00<?, ?B/s]
Downloading: 100%|██████████| 159/159 [00:00<00:00, 364kB/s]
Traceback (most recent call last):
  File "app.py", line 8, in <module>
    auto_speech_recog = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")
  File "/home/user/.local/lib/python3.8/site-packages/transformers/pipelines/__init__.py", line 873, in pipeline
    return pipeline_class(model=model, framework=framework, task=task, **kwargs)
  File "/home/user/.local/lib/python3.8/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 125, in __init__
    if self.model.__class__ in MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING.values():
NameError: name 'MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING' is not defined