tasinhoque committed
Commit 2478acf · 1 Parent(s): da9dfb3

Update README.md

Files changed (1):
  1. README.md +50 -42
README.md CHANGED

@@ -1,37 +1,40 @@
  ---
  license: mit
  tags:
- - generated_from_trainer
+ - generated_from_trainer
  datasets:
- - go_emotions
+ - go_emotions
  metrics:
- - accuracy
- - precision
- - recall
- - f1
+ - f1
  model-index:
- - name: roberta-large-go-emotions-2
-   results:
-   - task:
-       name: Text Classification
-       type: text-classification
-     dataset:
-       name: go_emotions
-       type: go_emotions
-       args: simplified
-     metrics:
-     - name: Accuracy
-       type: accuracy
-       value: 0.4432362698120162
-     - name: Precision
-       type: precision
-       value: 0.5075947480171396
-     - name: Recall
-       type: recall
-       value: 0.5249481075365657
-     - name: F1
-       type: f1
-       value: 0.5111051084102828
+ - name: roberta-large-go-emotions-2
+   results:
+   - task:
+       name: Text Classification
+       type: text-classification
+     dataset:
+       name: go_emotions
+       type: multilabel_classification
+       config: simplified
+       split: test
+       args: simplified
+     metrics:
+     - name: F1
+       type: f1
+       value: 0.5180
+   - task:
+       name: Text Classification
+       type: text-classification
+     dataset:
+       name: go_emotions
+       type: multilabel_classification
+       config: simplified
+       split: validation
+       args: simplified
+     metrics:
+     - name: F1
+       type: f1
+       value: 0.5203
  ---
  
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -39,13 +42,12 @@ should probably proofread and complete it, then remove this comment. -->
  
  # roberta-large-go-emotions-2
  
- This model is a fine-tuned version of [tasinhoque/roberta-large-go-emotions-2](https://huggingface.co/tasinhoque/roberta-large-go-emotions-2) on the go_emotions dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.1029
- - Accuracy: 0.4432
- - Precision: 0.5076
- - Recall: 0.5249
- - F1: 0.5111
+ This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the [go_emotions](https://huggingface.co/datasets/go_emotions) dataset. It achieves the following results on the test set (with a threshold of 0.15):
+ 
+ - Accuracy: 0.44020
+ - Precision: 0.5041
+ - Recall: 0.5461
+ - F1: 0.5180
  
  ## Model description
  
@@ -64,22 +66,28 @@ More information needed
  ### Training hyperparameters
  
  The following hyperparameters were used during training:
+ 
  - learning_rate: 5e-05
  - train_batch_size: 128
  - eval_batch_size: 128
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - num_epochs: 3
+ - num_epochs: 9
  
  ### Training results
  
- | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
- |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
- | No log        | 1.0   | 340  | 0.0962          | 0.4648   | 0.5138    | 0.5277 | 0.5151 |
- | 0.0458        | 2.0   | 680  | 0.0962          | 0.4462   | 0.5257    | 0.5270 | 0.5203 |
- | 0.0458        | 3.0   | 1020 | 0.1029          | 0.4432   | 0.5076    | 0.5249 | 0.5111 |
- 
+ | Training Loss | Epoch | Validation Loss | Accuracy | Precision | Recall | F1     |
+ | ------------- | ----- | --------------- | -------- | --------- | ------ | ------ |
+ | No log        | 1.0   | 0.0889          | 0.4043   | 0.4807    | 0.4568 | 0.4446 |
+ | 0.1062        | 2.0   | 0.0828          | 0.4113   | 0.4608    | 0.5363 | 0.4868 |
+ | 0.1062        | 3.0   | 0.0813          | 0.4201   | 0.5198    | 0.5612 | 0.5227 |
+ | No log        | 4.0   | 0.0862          | 0.4292   | 0.5012    | 0.5558 | 0.5208 |
+ | 0.0597        | 5.0   | 0.0924          | 0.4329   | 0.5164    | 0.5362 | 0.5151 |
+ | 0.0597        | 6.0   | 0.0956          | 0.4445   | 0.5241    | 0.5328 | 0.5161 |
+ | No log        | 7.0   | 0.0962          | 0.4648   | 0.5138    | 0.5277 | 0.5151 |
+ | 0.0458        | 8.0   | 0.0962          | 0.4462   | 0.5257    | 0.5270 | 0.5203 |
+ | 0.0458        | 9.0   | 0.1029          | 0.4432   | 0.5076    | 0.5249 | 0.5111 |
  
  ### Framework versions
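
The updated card reports test-set metrics "with a threshold of 0.15", i.e. multilabel predictions obtained by thresholding per-class scores. A minimal sketch of that evaluation style, using toy scores rather than go_emotions outputs (the helper names and data below are illustrative, not the card author's code; accuracy is taken as exact-match subset accuracy and precision/recall/F1 are micro-averaged, which is one common convention):

```python
# Sketch of thresholded multilabel evaluation. Toy scores/labels are
# illustrative only; metric definitions are the usual micro-averaged ones.

def predict(scores, threshold=0.15):
    """Turn per-class sigmoid scores into 0/1 multilabel predictions."""
    return [[1 if s >= threshold else 0 for s in row] for row in scores]

def micro_f1(y_true, y_pred):
    """Micro-averaged precision, recall, and F1 over all (sample, class) pairs."""
    pairs = [(t, p) for row_t, row_p in zip(y_true, y_pred)
             for t, p in zip(row_t, row_p)]
    tp = sum(t == 1 and p == 1 for t, p in pairs)
    fp = sum(t == 0 and p == 1 for t, p in pairs)
    fn = sum(t == 1 and p == 0 for t, p in pairs)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def subset_accuracy(y_true, y_pred):
    """Exact-match accuracy: a sample counts only if every label matches."""
    return sum(rt == rp for rt, rp in zip(y_true, y_pred)) / len(y_true)

scores = [[0.90, 0.10, 0.20],   # hypothetical sigmoid outputs, 3 classes
          [0.05, 0.40, 0.12]]
y_true = [[1, 0, 1],
          [0, 1, 0]]
y_pred = predict(scores)        # threshold 0.15 -> [[1, 0, 1], [0, 1, 0]]
```

A low threshold like 0.15 trades precision for recall, which matches the card's reported recall (0.5461) exceeding its precision (0.5041).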
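
The hyperparameters list a `linear` lr_scheduler_type with learning_rate 5e-05. A minimal sketch of what that schedule computes, assuming zero warmup steps (the card lists none) and a hypothetical total step count; the 340 steps per epoch below is taken from the Step column of the previous revision's results table, not from this one:

```python
# Sketch of a linear learning-rate decay schedule: the rate falls
# linearly from its initial value to 0 over the total training steps.
# Zero warmup is assumed, since the card does not list warmup steps.

def linear_lr(step, total_steps, base_lr=5e-05):
    """Learning rate at a given optimizer step under linear decay to zero."""
    remaining = max(0.0, (total_steps - step) / total_steps)
    return base_lr * remaining

# Hypothetical illustration: 9 epochs at 340 optimizer steps per epoch.
total_steps = 9 * 340
start = linear_lr(0, total_steps)          # 5e-05 at the first step
end = linear_lr(total_steps, total_steps)  # 0.0 after the last step
```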