---
license: mit
tags:
- generated_from_trainer
model-index:
- name: mlcovid19-classifier
  results: []
---
# mlcovid19-classifier

This model is a fine-tuned version of oscarwu/mlcovid19-classifier on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.4565
- F1 Macro: 0.6596
- F1 Misinformation: 0.8776
- F1 Factual: 0.8823
- F1 Other: 0.2188
- Prec Macro: 0.7181
- Prec Misinformation: 0.834
- Prec Factual: 0.8952
- Prec Other: 0.4251
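The per-class scores above imply a three-way sequence classifier over the labels misinformation, factual, and other. A minimal sketch of decoding raw classifier logits into one of those labels, in plain Python; the label order here is a hypothetical stand-in for the model's actual `id2label` mapping, which ships in its config:

```python
import math

# Hypothetical label order; the real mapping is the model config's id2label.
LABELS = ["misinformation", "factual", "other"]

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def decode(logits):
    """Map raw logits to (label, confidence) for the most likely class."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[best], probs[best]

label, score = decode([2.1, -0.3, 0.4])
```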
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2165
- num_epochs: 50
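The reported `total_train_batch_size` follows from the per-device batch size and gradient accumulation; a quick check of that arithmetic:

```python
# Values copied from the hyperparameter list above.
train_batch_size = 32              # per-device batch size
gradient_accumulation_steps = 32   # micro-batches per optimizer step

# Gradients are accumulated across 32 micro-batches before each update,
# so the effective batch size per optimizer step is their product.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
```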
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Misinformation | F1 Factual | F1 Other | Prec Macro | Prec Misinformation | Prec Factual | Prec Other |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1.1986 | 0.94 | 16 | 1.0988 | 0.6120 | 0.8270 | 0.7623 | 0.2466 | 0.6567 | 0.7636 | 0.8578 | 0.3487 |
| 1.1885 | 1.94 | 32 | 1.0851 | 0.6163 | 0.8271 | 0.7645 | 0.2574 | 0.6588 | 0.7660 | 0.8557 | 0.3547 |
| 1.1538 | 2.94 | 48 | 1.0625 | 0.6198 | 0.8275 | 0.7683 | 0.2635 | 0.6593 | 0.7695 | 0.8521 | 0.3565 |
| 1.1386 | 3.94 | 64 | 1.0307 | 0.6235 | 0.8259 | 0.7722 | 0.2725 | 0.6564 | 0.7738 | 0.8452 | 0.3502 |
| 1.0935 | 4.94 | 80 | 0.9911 | 0.6276 | 0.8259 | 0.7797 | 0.2771 | 0.6549 | 0.7803 | 0.8392 | 0.3452 |
| 1.055 | 5.94 | 96 | 0.9445 | 0.6304 | 0.8271 | 0.7912 | 0.2730 | 0.6521 | 0.7893 | 0.8344 | 0.3327 |
| 0.9925 | 6.94 | 112 | 0.8945 | 0.6340 | 0.8270 | 0.8001 | 0.2749 | 0.6518 | 0.7976 | 0.8251 | 0.3327 |
| 0.9446 | 7.94 | 128 | 0.8448 | 0.6390 | 0.8303 | 0.8106 | 0.2760 | 0.6545 | 0.8088 | 0.8186 | 0.3360 |
| 0.8813 | 8.94 | 144 | 0.7970 | 0.6448 | 0.8355 | 0.8238 | 0.2752 | 0.6598 | 0.8185 | 0.8214 | 0.3395 |
| 0.8259 | 9.94 | 160 | 0.7475 | 0.6480 | 0.8405 | 0.8330 | 0.2704 | 0.6644 | 0.8243 | 0.8256 | 0.3434 |
| 0.7721 | 10.94 | 176 | 0.6971 | 0.6532 | 0.8483 | 0.8430 | 0.2684 | 0.6746 | 0.8281 | 0.8375 | 0.3583 |
| 0.7107 | 11.94 | 192 | 0.6542 | 0.6510 | 0.8527 | 0.8496 | 0.2507 | 0.6765 | 0.8290 | 0.8448 | 0.3557 |
| 0.6742 | 12.94 | 208 | 0.6126 | 0.6527 | 0.8554 | 0.8544 | 0.2484 | 0.6793 | 0.8298 | 0.8521 | 0.3560 |
| 0.6296 | 13.94 | 224 | 0.5735 | 0.6560 | 0.8603 | 0.8586 | 0.2491 | 0.6902 | 0.8298 | 0.8602 | 0.3804 |
| 0.5947 | 14.94 | 240 | 0.5416 | 0.6592 | 0.8641 | 0.8624 | 0.2512 | 0.6986 | 0.8299 | 0.8689 | 0.3970 |
| 0.5728 | 15.94 | 256 | 0.5164 | 0.6584 | 0.8678 | 0.8674 | 0.2402 | 0.7028 | 0.8312 | 0.8745 | 0.4026 |
| 0.5424 | 16.94 | 272 | 0.4950 | 0.6620 | 0.8711 | 0.8720 | 0.2428 | 0.7110 | 0.8315 | 0.8836 | 0.4178 |
| 0.5277 | 17.94 | 288 | 0.4798 | 0.6594 | 0.8727 | 0.8751 | 0.2305 | 0.7107 | 0.8316 | 0.8874 | 0.4130 |
| 0.5204 | 18.94 | 304 | 0.4679 | 0.6613 | 0.8749 | 0.8767 | 0.2323 | 0.7183 | 0.8335 | 0.8868 | 0.4346 |
| 0.5061 | 19.94 | 320 | 0.4565 | 0.6596 | 0.8776 | 0.8823 | 0.2188 | 0.7181 | 0.834 | 0.8952 | 0.4251 |
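The macro columns in the table are unweighted means of the three per-class columns; a quick check against the final row (step 320):

```python
# Final-row (step 320) per-class scores, copied from the table above,
# in the order misinformation, factual, other.
f1_per_class = [0.8776, 0.8823, 0.2188]
prec_per_class = [0.834, 0.8952, 0.4251]

# Macro averaging weighs every class equally, which is why the weak
# "other" class drags the macro scores well below the two strong classes.
f1_macro = round(sum(f1_per_class) / 3, 4)
prec_macro = round(sum(prec_per_class) / 3, 4)
```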
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1