Runtime error

Downloading (…)e2d2268c3d7b1ff97663: 100%|██████████| 331M/331M [00:04<00:00, 73.3MB/s]

Some weights of the model checkpoint at distilroberta-base were not used when initializing RobertaForMaskedLM: ['roberta.pooler.dense.bias', 'roberta.pooler.dense.weight']
- This IS expected if you are initializing RobertaForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).

Traceback (most recent call last):
  File "app.py", line 64, in <module>
    model.load_state_dict(torch.load(pretrained_path, map_location=torch.device('cpu')))
  File "/home/user/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for EmoModel:
	Unexpected key(s) in state_dict: "base_model.embeddings.position_ids".
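The unexpected key usually means a transformers version mismatch: the checkpoint was saved under a release whose RoBERTa embeddings still registered a position_ids buffer, while the Space loads it under a release that no longer does. A minimal workaround sketch, assuming `model` and `pretrained_path` are defined as in app.py and that this is the only mismatched key (as the traceback suggests):

```python
import torch

# Load the checkpoint on CPU, as app.py line 64 already does.
state_dict = torch.load(pretrained_path, map_location=torch.device("cpu"))

# Drop the stale buffer; newer transformers releases no longer register
# position_ids in the embeddings, so the key is "unexpected" at load time.
state_dict.pop("base_model.embeddings.position_ids", None)

model.load_state_dict(state_dict)
```

Alternatively, `model.load_state_dict(state_dict, strict=False)` ignores the mismatch, at the cost of also silently skipping any other missing or unexpected keys.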
