runtime error

2023-12-19 11:21:18.332805: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
No model was supplied, defaulted to distilbert-base-cased-distilled-squad and revision 626af31 (https://huggingface.co/distilbert-base-cased-distilled-squad).
Using a pipeline without specifying a model name and revision in production is not recommended.
config.json: 100%|██████████| 473/473 [00:00<00:00, 2.36MB/s]
model.safetensors: 100%|█████████▉| 261M/261M [00:01<00:00, 205MB/s]
All PyTorch model weights were used when initializing TFDistilBertForQuestionAnswering.
All the weights of TFDistilBertForQuestionAnswering were initialized from the PyTorch model.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFDistilBertForQuestionAnswering for predictions without further training.
tokenizer_config.json: 100%|██████████| 29.0/29.0 [00:00<00:00, 163kB/s]
vocab.txt: 100%|██████████| 213k/213k [00:00<00:00, 41.7MB/s]
tokenizer.json: 100%|██████████| 436k/436k [00:00<00:00, 78.7MB/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 23, in <module>
    gr.File(type="pdf"),
  File "/home/user/.local/lib/python3.10/site-packages/gradio/component_meta.py", line 155, in wrapper
    return fn(self, **kwargs)
  File "/home/user/.local/lib/python3.10/site-packages/gradio/components/file.py", line 96, in __init__
    raise ValueError(
ValueError: Invalid value for parameter `type`: pdf. Please choose from one of: ['filepath', 'binary']
