---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- hate_speech_filipino
metrics:
- accuracy
- f1
model-index:
- name: scenario-kd-from-scratch-gold-silver-data-hate_speech_filipino-model-xlm-roberta
  results: []
---

# scenario-kd-from-scratch-gold-silver-data-hate_speech_filipino-model-xlm-roberta

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the hate_speech_filipino dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7729
- Accuracy: 0.7559
- F1: 0.7334

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6969

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 0.32  | 100  | 3.1640          | 0.6628   | 0.6705 |
| No log        | 0.64  | 200  | 2.7957          | 0.7136   | 0.6553 |
| No log        | 0.96  | 300  | 2.4898          | 0.7141   | 0.7235 |
| No log        | 1.28  | 400  | 2.3972          | 0.7353   | 0.6774 |
| 3.1684        | 1.6   | 500  | 2.1905          | 0.7476   | 0.7243 |
| 3.1684        | 1.92  | 600  | 2.1041          | 0.7481   | 0.7406 |
| 3.1684        | 2.24  | 700  | 2.1459          | 0.7481   | 0.7084 |
| 3.1684        | 2.56  | 800  | 2.3199          | 0.7457   | 0.6848 |
| 3.1684        | 2.88  | 900  | 2.0306          | 0.7422   | 0.7461 |
| 2.009         | 3.19  | 1000 | 2.0210          | 0.7623   | 0.7414 |
| 2.009         | 3.51  | 1100 | 2.1153          | 0.7554   | 0.7045 |
| 2.009         | 3.83  | 1200 | 1.9246          | 0.7583   | 0.7452 |
| 2.009         | 4.15  | 1300 | 1.9316          | 0.7611   | 0.7300 |
| 2.009         | 4.47  | 1400 | 2.5109          | 0.7547   | 0.7107 |
| 1.5147        | 4.79  | 1500 | 2.1018          | 0.7339   | 0.7465 |
| 1.5147        | 5.11  | 1600 | 2.5402          | 0.7344   | 0.6409 |
| 1.5147        | 5.43  | 1700 | 1.9100          | 0.7602   | 0.7427 |
| 1.5147        | 5.75  | 1800 | 2.0015          | 0.7519   | 0.7469 |
| 1.5147        | 6.07  | 1900 | 1.9725          | 0.7389   | 0.7376 |
| 1.2233        | 6.39  | 2000 | 1.8734          | 0.7545   | 0.7499 |
| 1.2233        | 6.71  | 2100 | 1.7899          | 0.7677   | 0.7478 |
| 1.2233        | 7.03  | 2200 | 1.9712          | 0.7583   | 0.7507 |
| 1.2233        | 7.35  | 2300 | 2.0334          | 0.7545   | 0.7049 |
| 1.2233        | 7.67  | 2400 | 1.7572          | 0.7583   | 0.7439 |
| 1.0785        | 7.99  | 2500 | 1.9824          | 0.7455   | 0.7361 |
| 1.0785        | 8.31  | 2600 | 2.1105          | 0.7306   | 0.7505 |
| 1.0785        | 8.63  | 2700 | 1.8427          | 0.7521   | 0.7451 |
| 1.0785        | 8.95  | 2800 | 1.9739          | 0.7569   | 0.7141 |
| 1.0785        | 9.27  | 2900 | 1.8361          | 0.7557   | 0.7401 |
| 0.9726        | 9.58  | 3000 | 1.8516          | 0.7564   | 0.7216 |
| 0.9726        | 9.9   | 3100 | 2.0626          | 0.7346   | 0.7495 |
| 0.9726        | 10.22 | 3200 | 1.8399          | 0.7651   | 0.7407 |
| 0.9726        | 10.54 | 3300 | 1.9179          | 0.7609   | 0.7201 |
| 0.9726        | 10.86 | 3400 | 1.7367          | 0.7625   | 0.7328 |
| 0.9237        | 11.18 | 3500 | 2.0266          | 0.7342   | 0.7462 |
| 0.9237        | 11.5  | 3600 | 2.3553          | 0.7094   | 0.7371 |
| 0.9237        | 11.82 | 3700 | 1.7729          | 0.7559   | 0.7334 |

### Framework versions

- Transformers 4.33.3
- PyTorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.13.3
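
For readers who want to approximate this setup, the hyperparameters listed under "Training hyperparameters" above map onto a `TrainingArguments` configuration roughly as follows. This is a minimal sketch, not the original training script: the knowledge-distillation objective implied by the model name is not shown, and the steps-based evaluation cadence is inferred from the results table.

```python
from transformers import TrainingArguments

# Minimal sketch of the hyperparameters listed in this card. The actual
# training script (including any distillation loss) is not part of the
# card, so treat this as an approximation.
training_args = TrainingArguments(
    output_dir="scenario-kd-from-scratch-gold-silver-data-hate_speech_filipino-model-xlm-roberta",
    learning_rate=5e-5,              # learning_rate: 5e-05
    per_device_train_batch_size=32,  # train_batch_size: 32
    per_device_eval_batch_size=32,   # eval_batch_size: 32
    seed=42,                         # seed: 42
    adam_beta1=0.9,                  # optimizer: Adam with betas=(0.9,0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,               # epsilon=1e-08
    lr_scheduler_type="linear",      # lr_scheduler_type: linear
    num_train_epochs=6969,           # num_epochs: 6969 (the logs above stop near epoch 11.8)
    evaluation_strategy="steps",     # assumption: evaluate every 100 steps, per the table
    eval_steps=100,
)
```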
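
For inference, the checkpoint can be loaded through the standard `text-classification` pipeline. This is a minimal sketch assuming the model is published on the Hugging Face Hub; the `your-namespace` prefix below is a placeholder, and the label names depend on the config shipped with the checkpoint.

```python
from transformers import pipeline

# "your-namespace" is a placeholder -- substitute the account the
# checkpoint is actually published under.
classifier = pipeline(
    "text-classification",
    model="your-namespace/scenario-kd-from-scratch-gold-silver-data-hate_speech_filipino-model-xlm-roberta",
)

# hate_speech_filipino is a binary hate/non-hate tweet dataset, so the
# pipeline returns one of two labels with a confidence score, e.g.:
# [{'label': 'LABEL_0', 'score': 0.97}]
print(classifier("Ang ganda ng araw ngayon!"))
```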