tasinhoque committed 394fc4f (1 parent: b2539f6)

Update README.md

---
license: mit
tags:
- generated_from_trainer
datasets:
- go_emotions
metrics:
- f1
model-index:
- name: roberta-large-goemotions
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: go_emotions
      type: multilabel_classification
      config: simplified
      split: test
      args: simplified
    metrics:
    - name: F1
      type: f1
      value: 0.51
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Text Classification GoEmotions

This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the [go_emotions](https://huggingface.co/datasets/go_emotions) dataset.
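A typical way to run this model is through the Transformers `pipeline` API. The repo id below is a guess based on the committer's namespace and the model name in the metadata, so substitute the actual Hub path; downloading the weights requires network access.

```python
from transformers import pipeline

# Assumed repo id -- replace with the actual Hub path of this model.
classifier = pipeline(
    "text-classification",
    model="tasinhoque/roberta-large-goemotions",
    top_k=None,  # GoEmotions is multi-label: return scores for every label
)

scores = classifier("I can't believe this actually worked, amazing!")
```

Because the task is multi-label, each returned score is an independent sigmoid probability rather than one entry of a softmax distribution.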

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
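The list above maps onto a `transformers.TrainingArguments` configuration roughly as follows. This is a sketch, not the author's actual training script: the output directory is illustrative, the batch size is assumed to be per device, and per-epoch evaluation is inferred from the results table below.

```python
from transformers import TrainingArguments

# Configuration implied by the hyperparameters listed above (Transformers 4.20 API).
training_args = TrainingArguments(
    output_dir="roberta-large-goemotions",  # illustrative
    learning_rate=5e-5,
    per_device_train_batch_size=128,  # assumption: listed batch size is per device
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    evaluation_strategy="epoch",  # assumption, matching the per-epoch results table
)
```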

### Training results

| Training Loss | Epoch | Validation Loss | Accuracy | Precision | Recall   | F1       |
| :-----------: | :---: | :-------------: | :------: | :-------: | :------: | :------: |
| No log        | 1.0   | 0.088978        | 0.404349 | 0.480763  | 0.456827 | 0.444685 |
| 0.10620       | 2.0   | 0.082806        | 0.411353 | 0.460896  | 0.536386 | 0.486819 |
| 0.10620       | 3.0   | 0.081338        | 0.420199 | 0.519828  | 0.561297 | 0.522716 |
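Since GoEmotions is multi-label, metrics like the F1 above come from thresholding per-label sigmoid scores rather than taking an argmax. The card does not state the threshold or the averaging scheme, so the sketch below assumes a 0.5 threshold and macro averaging, with made-up labels and scores:

```python
def f1(tp: int, fp: int, fn: int) -> float:
    """F1 = 2*TP / (2*TP + FP + FN); defined as 0.0 when the denominator is 0."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def macro_f1(y_true, y_score, threshold=0.5):
    """y_true: rows of 0/1 labels; y_score: matching rows of sigmoid scores."""
    n_labels = len(y_true[0])
    per_label = []
    for j in range(n_labels):
        tp = sum(1 for t, s in zip(y_true, y_score) if t[j] == 1 and s[j] >= threshold)
        fp = sum(1 for t, s in zip(y_true, y_score) if t[j] == 0 and s[j] >= threshold)
        fn = sum(1 for t, s in zip(y_true, y_score) if t[j] == 1 and s[j] < threshold)
        per_label.append(f1(tp, fp, fn))
    return sum(per_label) / n_labels

# Two examples, three hypothetical emotion labels (e.g. joy, anger, neutral).
y_true = [[1, 0, 0], [0, 1, 1]]
y_score = [[0.9, 0.2, 0.1], [0.4, 0.7, 0.3]]
print(round(macro_f1(y_true, y_score), 3))  # → 0.667
```

The same thresholded predictions feed the accuracy, precision, and recall columns; only the aggregation over labels differs.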

### Framework versions

- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.1.0
- Tokenizers 0.12.1
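To recreate a matching environment, the pins above correspond roughly to the following (note the PyTorch pip package is named `torch`; a suitable CUDA build may need PyTorch's own index):

```shell
pip install transformers==4.20.1 torch==1.12.0 datasets==2.1.0 tokenizers==0.12.1
```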