ayameRushia committed
Commit 951cf46
1 Parent(s): a27a884

End of training

Files changed (1): README.md (+38 -2)
README.md CHANGED
@@ -3,9 +3,26 @@ license: mit
 base_model: w11wo/indo-roberta-small
 tags:
 - generated_from_trainer
+datasets:
+- indonlu
+metrics:
+- accuracy
 model-index:
 - name: indo-roberta-small-finetuned-indonlu-smsa
-  results: []
+  results:
+  - task:
+      name: Text Classification
+      type: text-classification
+    dataset:
+      name: indonlu
+      type: indonlu
+      config: smsa
+      split: validation
+      args: smsa
+    metrics:
+    - name: Accuracy
+      type: accuracy
+      value: 0.8809523809523809
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -13,7 +30,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # indo-roberta-small-finetuned-indonlu-smsa
 
-This model is a fine-tuned version of [w11wo/indo-roberta-small](https://huggingface.co/w11wo/indo-roberta-small) on an unknown dataset.
+This model is a fine-tuned version of [w11wo/indo-roberta-small](https://huggingface.co/w11wo/indo-roberta-small) on the indonlu dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.4075
+- Accuracy: 0.8810
 
 ## Model description
 
@@ -41,6 +61,22 @@ The following hyperparameters were used during training:
 - lr_scheduler_warmup_steps: 2000
 - num_epochs: 10
 
+### Training results
+
+| Training Loss | Epoch | Step | Validation Loss | Accuracy |
+|:-------------:|:-----:|:----:|:---------------:|:--------:|
+| No log        | 1.0   | 344  | 0.6672          | 0.7103   |
+| 0.7802        | 2.0   | 688  | 0.4828          | 0.8111   |
+| 0.4817        | 3.0   | 1032 | 0.4378          | 0.8278   |
+| 0.4817        | 4.0   | 1376 | 0.3920          | 0.8540   |
+| 0.3703        | 5.0   | 1720 | 0.4251          | 0.8524   |
+| 0.2826        | 6.0   | 2064 | 0.3883          | 0.8659   |
+| 0.2826        | 7.0   | 2408 | 0.3782          | 0.8698   |
+| 0.2024        | 8.0   | 2752 | 0.3932          | 0.8698   |
+| 0.1429        | 9.0   | 3096 | 0.4075          | 0.8810   |
+| 0.1429        | 10.0  | 3440 | 0.4257          | 0.8738   |
+
+
 ### Framework versions
 
 - Transformers 4.38.2
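
As context for the card updated above, here is a minimal usage sketch; it is not part of the commit. It assumes the model is published under the hub id `ayameRushia/indo-roberta-small-finetuned-indonlu-smsa` (inferred from the committer name and the model-index name), and the Indonesian input sentence is purely illustrative.

```python
# Minimal inference sketch for the fine-tuned SMSA sentiment classifier.
# Assumption: the hub id below matches this repository; adjust it if the model
# is hosted under a different name.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ayameRushia/indo-roberta-small-finetuned-indonlu-smsa",
)

# SMSA is an Indonesian sentiment-analysis task, so the input is an Indonesian
# sentence ("The service at this restaurant is very satisfying.").
print(classifier("Pelayanan restoran ini sangat memuaskan."))
# Output is a list of dicts of the form [{'label': ..., 'score': ...}].
```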