runtime error
Downloading model.safetensors: 100%|██████████| 1.34G/1.34G [00:22<00:00, 60.0MB/s]

Some weights of the model checkpoint at bert-large-uncased-whole-word-masking-finetuned-squad were not used when initializing BertForQuestionAnswering: ['bert.pooler.dense.bias', 'bert.pooler.dense.weight']
- This IS expected if you are initializing BertForQuestionAnswering from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForQuestionAnswering from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).

Traceback (most recent call last):
  File "/home/user/app/app.py", line 10, in <module>
    question_answerer = pipeline("question-answering", model=model)
  File "/home/user/.local/lib/python3.10/site-packages/transformers/pipelines/__init__.py", line 904, in pipeline
    raise Exception(
Exception: Impossible to guess which tokenizer to use. Please provide a PreTrainedTokenizer class or a path/identifier to a pretrained tokenizer.
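The exception indicates that `pipeline()` was given an already-instantiated model object (`model=model`), so it cannot infer which tokenizer goes with it. A minimal sketch of the fix, assuming the checkpoint named in the log above is the intended one: either pass the model id as a string so `pipeline()` loads the matching tokenizer itself, or pass the tokenizer explicitly. (The commented-out variant assumes `model` in the original `app.py` was a loaded `BertForQuestionAnswering` instance.)

```python
from transformers import AutoTokenizer, pipeline

# Model id taken from the checkpoint name in the log above.
model_id = "bert-large-uncased-whole-word-masking-finetuned-squad"

# Option 1: pass the id as a string; pipeline() resolves the tokenizer itself.
question_answerer = pipeline("question-answering", model=model_id)

# Option 2 (assumed original setup): if `model` is an already-loaded model
# object, supply the tokenizer explicitly alongside it:
# tokenizer = AutoTokenizer.from_pretrained(model_id)
# question_answerer = pipeline("question-answering", model=model, tokenizer=tokenizer)
```

Either variant satisfies the "provide a PreTrainedTokenizer class or a path/identifier" requirement from the exception message.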