---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- hate_speech_filipino
metrics:
- accuracy
- f1
model-index:
- name: scenario-kd-from-pre-finetune-silver-div-2-data-hate_speech_filipino-model-xlm-r
  results: []
---

# scenario-kd-from-pre-finetune-silver-div-2-data-hate_speech_filipino-model-xlm-r

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [hate_speech_filipino](https://huggingface.co/datasets/hate_speech_filipino) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4122
- Accuracy: 0.7810
- F1: 0.7699

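For quick reference, a minimal inference sketch using the `transformers` pipeline API. The hub repository id below is an assumption based on the model name and likely needs the hosting namespace prepended:

```python
from transformers import pipeline

# Hypothetical hub id -- prepend the actual namespace hosting this checkpoint.
model_id = "scenario-kd-from-pre-finetune-silver-div-2-data-hate_speech_filipino-model-xlm-r"

# Build a text-classification pipeline from the fine-tuned checkpoint.
classifier = pipeline("text-classification", model=model_id)

# Classify a Filipino sentence; label names depend on the checkpoint's config.
print(classifier("Ang ganda ng araw ngayon!"))
```
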
## Model description

This checkpoint is `xlm-roberta-base` fine-tuned for Filipino hate speech classification. Per the model name, training used knowledge distillation (kd) from a pre-finetuned teacher on silver-labeled data; further details are not documented.

## Intended uses & limitations

The model is intended for detecting hate speech in Filipino text. Its limitations are not documented.

## Training and evaluation data

The model was trained and evaluated on the [hate_speech_filipino](https://huggingface.co/datasets/hate_speech_filipino) dataset; preprocessing and split details are not documented.
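
A minimal loading sketch with the `datasets` library (Datasets 2.14.5, per the framework versions below):

```python
from datasets import load_dataset

# Pull the Filipino hate speech dataset from the Hugging Face Hub.
dataset = load_dataset("hate_speech_filipino")

# Inspect the splits and a sample row (text plus a binary hate/non-hate label).
print(dataset)
print(dataset["train"][0])
```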

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6969

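As a rough reconstruction, these values map onto `transformers.TrainingArguments` (for the Transformers 4.33.x release listed under framework versions) as sketched below; `output_dir` and the step-based evaluation cadence are assumptions, the latter inferred from the 100-step intervals in the results table:

```python
from transformers import TrainingArguments

# A sketch of the training configuration; values mirror the list above.
training_args = TrainingArguments(
    output_dir="./output",          # assumed; not stated in the card
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=6969,
    evaluation_strategy="steps",    # inferred from the 100-step eval cadence
    eval_steps=100,
)
```
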
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 0.32  | 100  | 1.4902          | 0.7143   | 0.7119 |
| No log        | 0.64  | 200  | 1.0099          | 0.7379   | 0.7416 |
| No log        | 0.96  | 300  | 0.9000          | 0.7531   | 0.7410 |
| No log        | 1.28  | 400  | 0.7416          | 0.7595   | 0.7548 |
| 1.3478        | 1.6   | 500  | 0.7405          | 0.7703   | 0.7495 |
| 1.3478        | 1.92  | 600  | 0.6557          | 0.7743   | 0.7569 |
| 1.3478        | 2.24  | 700  | 0.6338          | 0.7661   | 0.7656 |
| 1.3478        | 2.56  | 800  | 0.6421          | 0.7682   | 0.7429 |
| 1.3478        | 2.88  | 900  | 0.5503          | 0.7746   | 0.7649 |
| 0.667         | 3.19  | 1000 | 0.5596          | 0.7741   | 0.7623 |
| 0.667         | 3.51  | 1100 | 0.6638          | 0.7729   | 0.7365 |
| 0.667         | 3.83  | 1200 | 0.5504          | 0.7798   | 0.7545 |
| 0.667         | 4.15  | 1300 | 0.5163          | 0.7769   | 0.7613 |
| 0.667         | 4.47  | 1400 | 0.6817          | 0.7498   | 0.7641 |
| 0.4362        | 4.79  | 1500 | 0.5097          | 0.7717   | 0.7725 |
| 0.4362        | 5.11  | 1600 | 0.5382          | 0.7765   | 0.7460 |
| 0.4362        | 5.43  | 1700 | 0.5237          | 0.7800   | 0.7493 |
| 0.4362        | 5.75  | 1800 | 0.4712          | 0.7786   | 0.7552 |
| 0.4362        | 6.07  | 1900 | 0.4671          | 0.7755   | 0.7570 |
| 0.3401        | 6.39  | 2000 | 0.4376          | 0.7859   | 0.7752 |
| 0.3401        | 6.71  | 2100 | 0.4753          | 0.7706   | 0.7720 |
| 0.3401        | 7.03  | 2200 | 0.4661          | 0.7854   | 0.7706 |
| 0.3401        | 7.35  | 2300 | 0.4555          | 0.7774   | 0.7605 |
| 0.3401        | 7.67  | 2400 | 0.4362          | 0.7812   | 0.7683 |
| 0.2844        | 7.99  | 2500 | 0.4511          | 0.7864   | 0.7734 |
| 0.2844        | 8.31  | 2600 | 0.4433          | 0.7767   | 0.7707 |
| 0.2844        | 8.63  | 2700 | 0.5001          | 0.7656   | 0.7699 |
| 0.2844        | 8.95  | 2800 | 0.4337          | 0.7810   | 0.7723 |
| 0.2844        | 9.27  | 2900 | 0.4145          | 0.7812   | 0.7668 |
| 0.2436        | 9.58  | 3000 | 0.4143          | 0.7840   | 0.7698 |
| 0.2436        | 9.9   | 3100 | 0.3993          | 0.7786   | 0.7687 |
| 0.2436        | 10.22 | 3200 | 0.4030          | 0.7899   | 0.7734 |
| 0.2436        | 10.54 | 3300 | 0.4054          | 0.7826   | 0.7666 |
| 0.2436        | 10.86 | 3400 | 0.3996          | 0.7762   | 0.7716 |
| 0.2246        | 11.18 | 3500 | 0.4122          | 0.7810   | 0.7699 |

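Training stopped at step 3500 (epoch ≈11.2), far short of the configured 6969 epochs, so an early-stopping criterion presumably ended the run; the final row matches the evaluation results reported at the top of this card. The accuracy and F1 columns can be reproduced with a `compute_metrics` callback along these lines (a sketch assuming scikit-learn; the F1 averaging mode is an assumption, as the card does not state it):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    # transformers.Trainer passes a (logits, labels) pair at evaluation time.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, predictions),
        "f1": f1_score(labels, predictions, average="binary"),  # assumed averaging
    }
```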

### Framework versions

- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.14.5
- Tokenizers 0.13.3