tasinhoque committed on
Commit 825bd0d · 1 Parent(s): ea3aa2c

Update README.md

Files changed (1): README.md (+49 −44)
README.md CHANGED
@@ -1,51 +1,53 @@
 ---
 license: mit
 tags:
-- generated_from_trainer
+- generated_from_trainer
 datasets:
-- go_emotions
+- go_emotions
 metrics:
-- accuracy
-- precision
-- recall
-- f1
+- f1
 model-index:
-- name: roberta-large-go-emotions-3
-  results:
-  - task:
-      name: Text Classification
-      type: text-classification
-    dataset:
-      name: go_emotions
-      type: go_emotions
-      args: simplified
-    metrics:
-    - name: Accuracy
-      type: accuracy
-      value: 0.44452635458901585
-    - name: Precision
-      type: precision
-      value: 0.5241481690233145
-    - name: Recall
-      type: recall
-      value: 0.532779745019394
-    - name: F1
-      type: f1
-      value: 0.5160548869396229
+- name: roberta-large-go-emotions-3
+  results:
+  - task:
+      name: Text Classification
+      type: text-classification
+    dataset:
+      name: go_emotions
+      type: multilabel_classification
+      config: simplified
+      split: test
+      args: simplified
+    metrics:
+    - name: F1
+      type: f1
+      value: 0.5204
+  - task:
+      name: Text Classification
+      type: text-classification
+    dataset:
+      name: go_emotions
+      type: multilabel_classification
+      config: simplified
+      split: validation
+      args: simplified
+    metrics:
+    - name: F1
+      type: f1
+      value: 0.5208
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# roberta-large-go-emotions-3
+# roberta-large-go-emotions-2
 
-This model is a fine-tuned version of [tasinhoque/roberta-large-go-emotions](https://huggingface.co/tasinhoque/roberta-large-go-emotions) on the go_emotions dataset.
-It achieves the following results on the evaluation set:
-- Loss: 0.0956
-- Accuracy: 0.4445
-- Precision: 0.5241
-- Recall: 0.5328
-- F1: 0.5161
+This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the [go_emotions](https://huggingface.co/datasets/go_emotions) dataset. It achieves the following results on the test set (with a threshold of 0.15):
+
+- Accuracy: 0.4363
+- Precision: 0.4955
+- Recall: 0.5655
+- F1: 0.5204
 
 ## Model description
 
@@ -64,26 +66,29 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
+
 - learning_rate: 5e-05
 - train_batch_size: 128
 - eval_batch_size: 128
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 3
+- num_epochs: 6
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
-|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
-| No log        | 1.0   | 340  | 0.0862          | 0.4292   | 0.5012    | 0.5558 | 0.5208 |
-| 0.0597        | 2.0   | 680  | 0.0924          | 0.4329   | 0.5164    | 0.5362 | 0.5151 |
-| 0.0597        | 3.0   | 1020 | 0.0956          | 0.4445   | 0.5241    | 0.5328 | 0.5161 |
-
+| Training Loss | Epoch | Validation Loss | Accuracy | Precision | Recall | F1     |
+| ------------- | ----- | --------------- | -------- | --------- | ------ | ------ |
+| No log        | 1.0   | 0.0889          | 0.4043   | 0.4807    | 0.4568 | 0.4446 |
+| 0.1062        | 2.0   | 0.0828          | 0.4113   | 0.4608    | 0.5363 | 0.4868 |
+| 0.1062        | 3.0   | 0.0813          | 0.4201   | 0.5198    | 0.5612 | 0.5227 |
+| No log        | 1.0   | 0.0862          | 0.4292   | 0.5012    | 0.5558 | 0.5208 |
+| 0.0597        | 2.0   | 0.0924          | 0.4329   | 0.5164    | 0.5362 | 0.5151 |
+| 0.0597        | 3.0   | 0.0956          | 0.4445   | 0.5241    | 0.5328 | 0.5161 |
 
 ### Framework versions
 
 - Transformers 4.20.1
 - Pytorch 1.12.0
 - Datasets 2.1.0
-- Tokenizers 0.12.1
+- Tokenizers 0.12.1
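The updated card reports multi-label metrics at a probability threshold of 0.15. As an illustrative sketch only (this is not the author's evaluation code, and the helper names are hypothetical), one common convention is to binarize each label's sigmoid output at the threshold, then compute subset accuracy and micro-averaged precision/recall/F1:

```python
def threshold_predict(probs, threshold=0.15):
    """Binarize per-label sigmoid probabilities at a fixed threshold."""
    return [[1 if p >= threshold else 0 for p in row] for row in probs]

def multilabel_scores(y_true, y_pred):
    """Subset (exact-match) accuracy plus micro-averaged precision, recall, F1."""
    exact = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    tp = fp = fn = 0
    for true_row, pred_row in zip(y_true, y_pred):
        for t, p in zip(true_row, pred_row):
            tp += int(t == 1 and p == 1)  # label predicted and present
            fp += int(t == 0 and p == 1)  # label predicted but absent
            fn += int(t == 1 and p == 0)  # label present but missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return exact, precision, recall, f1
```

In practice one would usually rely on `sklearn.metrics.f1_score(..., average="micro")` rather than hand-rolling this; the sketch just makes the thresholding step explicit. Whether the card's "Accuracy" is exact-match or some other multi-label convention is not stated in the README.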