---
license: mit
base_model: facebook/xlm-v-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: scenario-TCR-XLMV-XCOPA-2_data-xcopa_all
  results: []
---

# scenario-TCR-XLMV-XCOPA-2_data-xcopa_all

This model is a fine-tuned version of [facebook/xlm-v-base](https://huggingface.co/facebook/xlm-v-base) on the XCOPA dataset (all languages, per the model name).
It achieves the following results on the evaluation set:
- Loss: 1.3263
- Accuracy: 0.5417
- F1: 0.5291

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the sketch below for how they map onto `TrainingArguments`):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 34
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 500
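As a minimal sketch, the hyperparameters above translate into `transformers.TrainingArguments` roughly as follows. The model class, the evaluation schedule, and the dataset wiring are assumptions (XCOPA is a multiple-choice task, and the results table shows evaluation every 5 steps); only the argument values themselves are taken from this card.

```python
# Hedged sketch of reproducing the listed hyperparameters with the
# Hugging Face Trainer. Dataset loading is omitted because this card
# does not document it.
from transformers import (
    AutoModelForMultipleChoice,  # assumption: XCOPA is multiple choice
    AutoTokenizer,
    TrainingArguments,
)

model_name = "facebook/xlm-v-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMultipleChoice.from_pretrained(model_name)

training_args = TrainingArguments(
    output_dir="scenario-TCR-XLMV-XCOPA-2_data-xcopa_all",
    learning_rate=5e-05,
    per_device_train_batch_size=32,  # train_batch_size
    per_device_eval_batch_size=32,   # eval_batch_size
    seed=34,
    adam_beta1=0.9,                  # Adam betas/epsilon from the card
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=500,            # per the results table, training
                                     # actually ended at epoch 23.46
    evaluation_strategy="steps",     # assumption: the log evaluates
    eval_steps=5,                    # every 5 steps
)

# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=..., eval_dataset=...)  # not specified here
```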
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 0.38  | 5    | 0.6932          | 0.5075   | 0.4793 |
| No log        | 0.77  | 10   | 0.6932          | 0.5025   | 0.4749 |
| No log        | 1.15  | 15   | 0.6931          | 0.51     | 0.4878 |
| No log        | 1.54  | 20   | 0.6931          | 0.5192   | 0.4934 |
| No log        | 1.92  | 25   | 0.6931          | 0.51     | 0.4869 |
| No log        | 2.31  | 30   | 0.6931          | 0.5417   | 0.5089 |
| No log        | 2.69  | 35   | 0.6931          | 0.5533   | 0.5323 |
| No log        | 3.08  | 40   | 0.6931          | 0.5542   | 0.5352 |
| No log        | 3.46  | 45   | 0.6931          | 0.55     | 0.5329 |
| No log        | 3.85  | 50   | 0.6931          | 0.5475   | 0.5233 |
| No log        | 4.23  | 55   | 0.6931          | 0.5217   | 0.4884 |
| No log        | 4.62  | 60   | 0.6932          | 0.5283   | 0.5052 |
| No log        | 5.0   | 65   | 0.6931          | 0.5308   | 0.4969 |
| No log        | 5.38  | 70   | 0.6930          | 0.5217   | 0.4929 |
| No log        | 5.77  | 75   | 0.6930          | 0.53     | 0.5138 |
| No log        | 6.15  | 80   | 0.6931          | 0.5417   | 0.5259 |
| No log        | 6.54  | 85   | 0.6931          | 0.5525   | 0.5318 |
| No log        | 6.92  | 90   | 0.6931          | 0.5417   | 0.5054 |
| No log        | 7.31  | 95   | 0.6931          | 0.56     | 0.5319 |
| No log        | 7.69  | 100  | 0.6930          | 0.555    | 0.5291 |
| No log        | 8.08  | 105  | 0.6931          | 0.5192   | 0.4978 |
| No log        | 8.46  | 110  | 0.6931          | 0.5192   | 0.4943 |
| No log        | 8.85  | 115  | 0.6930          | 0.5383   | 0.5249 |
| No log        | 9.23  | 120  | 0.6930          | 0.5375   | 0.5244 |
| No log        | 9.62  | 125  | 0.6930          | 0.54     | 0.5192 |
| No log        | 10.0  | 130  | 0.6930          | 0.5617   | 0.5450 |
| No log        | 10.38 | 135  | 0.6930          | 0.54     | 0.5217 |
| No log        | 10.77 | 140  | 0.6929          | 0.5475   | 0.5241 |
| No log        | 11.15 | 145  | 0.6926          | 0.57     | 0.5559 |
| No log        | 11.54 | 150  | 0.6920          | 0.5658   | 0.5611 |
| No log        | 11.92 | 155  | 0.6896          | 0.5758   | 0.5661 |
| No log        | 12.31 | 160  | 0.6879          | 0.5275   | 0.5074 |
| No log        | 12.69 | 165  | 0.7069          | 0.4792   | 0.4435 |
| No log        | 13.08 | 170  | 0.6925          | 0.5358   | 0.5276 |
| No log        | 13.46 | 175  | 0.6941          | 0.5275   | 0.5082 |
| No log        | 13.85 | 180  | 0.7034          | 0.52     | 0.5052 |
| No log        | 14.23 | 185  | 0.7386          | 0.54     | 0.5208 |
| No log        | 14.62 | 190  | 0.7108          | 0.5242   | 0.4987 |
| No log        | 15.0  | 195  | 0.7418          | 0.5275   | 0.4978 |
| No log        | 15.38 | 200  | 0.8982          | 0.5292   | 0.5074 |
| No log        | 15.77 | 205  | 0.8073          | 0.5525   | 0.5398 |
| No log        | 16.15 | 210  | 1.0723          | 0.5633   | 0.5459 |
| No log        | 16.54 | 215  | 0.8696          | 0.5508   | 0.5293 |
| No log        | 16.92 | 220  | 0.8563          | 0.5533   | 0.5315 |
| No log        | 17.31 | 225  | 1.1253          | 0.5317   | 0.5147 |
| No log        | 17.69 | 230  | 1.0949          | 0.5483   | 0.5279 |
| No log        | 18.08 | 235  | 0.8768          | 0.5592   | 0.5482 |
| No log        | 18.46 | 240  | 1.1880          | 0.5558   | 0.5401 |
| No log        | 18.85 | 245  | 1.1781          | 0.5617   | 0.5434 |
| No log        | 19.23 | 250  | 0.9759          | 0.5508   | 0.5381 |
| No log        | 19.62 | 255  | 1.2402          | 0.5442   | 0.5321 |
| No log        | 20.0  | 260  | 1.2534          | 0.5308   | 0.5048 |
| No log        | 20.38 | 265  | 1.1987          | 0.5567   | 0.5382 |
| No log        | 20.77 | 270  | 1.1627          | 0.5425   | 0.5344 |
| No log        | 21.15 | 275  | 1.2200          | 0.5417   | 0.5133 |
| No log        | 21.54 | 280  | 1.4727          | 0.5342   | 0.5152 |
| No log        | 21.92 | 285  | 1.1766          | 0.55     | 0.5271 |
| No log        | 22.31 | 290  | 1.3099          | 0.5458   | 0.5265 |
| No log        | 22.69 | 295  | 1.3408          | 0.5558   | 0.5361 |
| No log        | 23.08 | 300  | 1.2960          | 0.5458   | 0.5164 |
| No log        | 23.46 | 305  | 1.3263          | 0.5417   | 0.5291 |

### Framework versions

- Transformers 4.33.3
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.13.3
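### Usage

A minimal inference sketch follows. It assumes the checkpoint is published under this card's name and carries a multiple-choice head (standard for COPA-style tasks); both are assumptions, since this card does not document usage. The example premise and choices are the canonical COPA illustration, not taken from this card.

```python
# Hedged usage sketch for a multiple-choice checkpoint.
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

repo = "scenario-TCR-XLMV-XCOPA-2_data-xcopa_all"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForMultipleChoice.from_pretrained(repo)

premise = "The man broke his toe. What was the cause?"
choices = ["He got a hole in his sock.", "He dropped a hammer on his foot."]

# Pair the premise with each choice; multiple-choice models expect
# inputs of shape (batch_size, num_choices, seq_len).
enc = tokenizer([premise] * len(choices), choices,
                return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)
print("Predicted choice:", choices[logits.argmax(-1).item()])
```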