AdamCodd committed
Commit aaad9ac • 1 Parent(s): 5d6eb1d

Update README.md

Files changed (1):
  1. README.md +92 -0
README.md CHANGED
@@ -1,3 +1,95 @@
  ---
  license: apache-2.0
+ tags:
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ model-index:
+ - name: distilbert-base-uncased-finetuned-emotion-balanced
+   results:
+   - task:
+       name: Text Classification
+       type: text-classification
+     dataset:
+       name: emotion-balanced
+       type: emotion
+       args: default
+     metrics:
+     - name: Accuracy
+       type: accuracy
+       value: 0.9521
+     - name: Loss
+       type: loss
+       value: 0.1216
+     - name: F1
+       type: f1
+       value: 0.9520944952964783
+ widget:
+ - text: Your actions were very caring.
+   example_title: Test sentence
  ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # distilbert-emotion
+
+ This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.1216
+ - Accuracy: 0.9521
+
+ ## Model description
+
+ This emotion classifier was trained on 89,754 examples split into train, validation, and test sets, with each label perfectly balanced in every split.
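+
+ A minimal inference sketch follows; the repo id is taken from the model-index name above and is an assumption about where the checkpoint is published:
+
+ ```python
+ from transformers import pipeline
+
+ # Assumed repo id (from the model-index name above); adjust if the checkpoint
+ # is hosted under a different path.
+ classifier = pipeline(
+     "text-classification",
+     model="AdamCodd/distilbert-base-uncased-finetuned-emotion-balanced",
+ )
+
+ # Returns a list of {"label": ..., "score": ...} dicts, one per input text.
+ print(classifier("Your actions were very caring."))
+ ```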
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
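+
+ The Model description above states that every label was perfectly balanced in each split. Purely as an illustration of such a balancing step (the source dataset id, seed, and counts below are assumptions, not details from this card), balanced splits could be assembled with the `datasets` library along these lines:
+
+ ```python
+ from datasets import concatenate_datasets, load_dataset
+
+ # Illustrative only: downsample every emotion class to the same size so each
+ # label is equally represented. The "emotion" dataset id and the seed are
+ # assumptions, not taken from this model card.
+ raw = load_dataset("emotion", split="train")
+
+ per_label = [raw.filter(lambda ex, l=label: ex["label"] == l) for label in range(6)]
+ smallest = min(len(subset) for subset in per_label)
+
+ balanced = concatenate_datasets(
+     [subset.shuffle(seed=1270).select(range(smallest)) for subset in per_label]
+ )
+ ```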
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 3e-05
+ - train_batch_size: 32
+ - eval_batch_size: 64
+ - seed: 1270
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 150
+ - num_epochs: 1
+ - weight_decay: 0.01
+
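+ A rough sketch of how these hyperparameters could be wired up in code; the number of training steps below is a placeholder, not a value from this card:
+
+ ```python
+ import torch
+ from transformers import AutoModelForSequenceClassification, get_linear_schedule_with_warmup
+
+ model = AutoModelForSequenceClassification.from_pretrained(
+     "distilbert-base-uncased", num_labels=6
+ )
+
+ # Adam with the betas, epsilon, and weight decay listed above.
+ optimizer = torch.optim.Adam(
+     model.parameters(), lr=3e-5, betas=(0.9, 0.999), eps=1e-8, weight_decay=0.01
+ )
+
+ # Linear schedule with 150 warmup steps; the total step count depends on the
+ # train split size and batch size, so the value here is only a placeholder.
+ num_training_steps = 2000  # placeholder
+ scheduler = get_linear_schedule_with_warmup(
+     optimizer, num_warmup_steps=150, num_training_steps=num_training_steps
+ )
+ ```
+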
+ ### Training results
+
+ |              | precision | recall | f1-score | support |
+ |--------------|-----------|--------|----------|---------|
+ | sadness      | 0.9882    | 0.9485 | 0.9679   | 1496    |
+ | joy          | 0.9956    | 0.9057 | 0.9485   | 1496    |
+ | love         | 0.9256    | 0.9980 | 0.9604   | 1496    |
+ | anger        | 0.9628    | 0.9519 | 0.9573   | 1496    |
+ | fear         | 0.9348    | 0.9098 | 0.9221   | 1496    |
+ | surprise     | 0.9160    | 0.9987 | 0.9555   | 1496    |
+ | accuracy     |           |        | 0.9521   | 8976    |
+ | macro avg    | 0.9538    | 0.9521 | 0.9520   | 8976    |
+ | weighted avg | 0.9538    | 0.9521 | 0.9520   | 8976    |
+
+ | Test metric | DataLoader 0       |
+ |-------------|--------------------|
+ | test_acc    | 0.9520944952964783 |
+ | test_loss   | 0.121663898229599  |
+
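+ A per-class report like the one above could be regenerated roughly as follows; the repo id is assumed from the model-index name, and `texts`/`labels` stand in for a held-out test split that is not distributed with this card:
+
+ ```python
+ from sklearn.metrics import classification_report
+ from transformers import pipeline
+
+ # Assumed repo id; the model's id2label mapping is assumed to use the emotion
+ # names shown in the table above.
+ classifier = pipeline(
+     "text-classification",
+     model="AdamCodd/distilbert-base-uncased-finetuned-emotion-balanced",
+ )
+
+ texts = ["i feel like crying", "you make me so happy"]  # placeholder test texts
+ labels = ["sadness", "joy"]                              # placeholder gold labels
+
+ predictions = [out["label"] for out in classifier(texts)]
+ print(classification_report(labels, predictions, digits=4))
+ ```
+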
+ ### Framework versions
+
+ - Transformers 4.33.1
+ - PyTorch Lightning 2.0.8
+ - Tokenizers 0.13.3