---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- AdamCodd/emotion-balanced
metrics:
- accuracy
- f1
- recall
- precision
widget:
- text: "He looked out of the rain-streaked window, lost in thought, the faintest hint of melancholy in his eyes, as he remembered moments from a distant past."
  example_title: "Sadness"
- text: "As she strolled through the park, a soft smile played on her lips, and her heart felt lighter with each step, appreciating the simple beauty of nature."
  example_title: "Joy"
- text: "Their fingers brushed lightly as they exchanged a knowing glance, a subtle connection that spoke volumes about the deep affection they held for each other."
  example_title: "Love"
- text: "She clenched her fists and took a deep breath, trying to suppress the simmering frustration that welled up when her ideas were dismissed without consideration."
  example_title: "Anger"
- text: "In the quiet of the night, the gentle rustling of leaves outside her window sent shivers down her spine, leaving her feeling uneasy and vulnerable."
  example_title: "Fear"
- text: "Upon opening the old dusty book, a delicate, hand-painted map fell out, revealing hidden treasures she never expected to find."
  example_title: "Surprise"
base_model: prajjwal1/bert-tiny
model-index:
- name: tinybert-emotion-balanced
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: emotion
      type: emotion
      args: default
    metrics:
    - type: accuracy
      value: 0.9354
      name: Accuracy
    - type: loss
      value: 0.1809
      name: Loss
    - type: f1
      value: 0.9354946613311768
      name: F1
---

# tinybert-emotion

This model is a fine-tuned version of [bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on the [emotion balanced dataset](https://huggingface.co/datasets/AdamCodd/emotion-balanced).
It achieves the following results on the evaluation set:
- Loss: 0.1809
- Accuracy: 0.9354

## Model description

TinyBERT is 7.5 times smaller and 9.4 times faster at inference than its teacher BERT model (for comparison, DistilBERT is 40% smaller and 1.6 times faster than BERT). The model was trained on 89,754 examples split into train, validation, and test sets, with every label perfectly balanced in each split.

## Intended uses & limitations

This model trades some accuracy for speed: it is less accurate than [distilbert-emotion-balanced](https://huggingface.co/AdamCodd/distilbert-base-uncased-finetuned-emotion-balanced) and can misinterpret complex sentences. Even so, its performance is quite good and should be more than enough for most use cases.

Usage:
```python
from transformers import pipeline

# Create the pipeline
emotion_classifier = pipeline('text-classification', model='AdamCodd/tinybert-emotion-balanced')

# Now you can use the pipeline to classify emotions
result = emotion_classifier("We are delighted that you will be coming to visit us. It will be so nice to have you here.")
print(result)
# [{'label': 'joy', 'score': 0.9895486831665039}]
```
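
If you need scores for all six emotions rather than only the top label, one option (a minimal sketch, not part of the original card) is to call the model directly and apply a softmax over the logits:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "AdamCodd/tinybert-emotion-balanced"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer(
    "We are delighted that you will be coming to visit us.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits

# Softmax turns the raw logits into one probability per emotion label;
# the id2label mapping is assumed to be populated in the model config
probs = torch.softmax(logits, dim=-1).squeeze()
for idx, p in enumerate(probs.tolist()):
    print(f"{model.config.id2label[idx]}: {p:.4f}")
```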

This model struggles to accurately categorize negative sentences, as well as those containing sarcasm or irony. These limitations are largely attributable to TinyBERT's constrained semantic understanding. Although the model is generally proficient at emotion detection, it may lack the subtlety needed to interpret complex emotional nuances.

## Training and evaluation data

The model was trained and evaluated on the [emotion-balanced](https://huggingface.co/datasets/AdamCodd/emotion-balanced) dataset: 89,754 examples split into train, validation, and test sets, with the six emotion labels perfectly balanced in each split.
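
A minimal sketch of loading the dataset with the `datasets` library (the exact split names and column schema are assumptions, not confirmed by this card):
```python
from datasets import load_dataset

# Load the balanced emotion dataset from the Hugging Face Hub
dataset = load_dataset("AdamCodd/emotion-balanced")

print(dataset)               # split names and sizes
print(dataset["train"][0])   # one example; a {text, label} schema is assumed
```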

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an equivalent optimizer/scheduler setup is sketched after the list):
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 1270
- optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 150
- num_epochs: 10
- weight_decay: 0.01
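
The actual training used PyTorch Lightning, but the optimizer and scheduler configuration roughly corresponds to the following sketch (the split sizes used for the step count are inferred from the test support, not stated in the card):
```python
import torch
from transformers import AutoModelForSequenceClassification, get_linear_schedule_with_warmup

model = AutoModelForSequenceClassification.from_pretrained(
    "prajjwal1/bert-tiny", num_labels=6
)

# AdamW configured with the hyperparameters listed above
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=3e-5,
    betas=(0.9, 0.999),
    eps=1e-8,
    weight_decay=0.01,
)

# Linear decay with 150 warmup steps; the total step count assumes an
# 80/10/10 split of the 89,754 examples (the test support is 8,976)
steps_per_epoch = (89754 - 2 * 8976) // 32
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=150,
    num_training_steps=10 * steps_per_epoch,
)
```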

### Training results

|              | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| sadness      | 0.9733    | 0.9245 | 0.9482   | 1496    |
| joy          | 0.9651    | 0.8864 | 0.9240   | 1496    |
| love         | 0.9127    | 0.9786 | 0.9445   | 1496    |
| anger        | 0.9479    | 0.9365 | 0.9422   | 1496    |
| fear         | 0.9213    | 0.9004 | 0.9108   | 1496    |
| surprise     | 0.9016    | 0.9866 | 0.9422   | 1496    |
| accuracy     |           |        | 0.9355   | 8976    |
| macro avg    | 0.9370    | 0.9355 | 0.9353   | 8976    |
| weighted avg | 0.9370    | 0.9355 | 0.9353   | 8976    |

- test_acc: 0.9354946613311768
- test_loss: 0.1809326708316803
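
A report in this format can be reproduced with scikit-learn once predictions have been collected on the test split (a sketch with toy data; the label order is an assumption based on the table above):
```python
from sklearn.metrics import classification_report

# Label order assumed to match the table above
labels = ["sadness", "joy", "love", "anger", "fear", "surprise"]

# Toy stand-ins; in practice these are integer label ids gathered by
# running the model over the 8,976-example test split
y_true = [0, 1, 2, 3, 4, 5, 0, 1]
y_pred = [0, 1, 2, 3, 4, 5, 0, 2]

print(classification_report(y_true, y_pred, target_names=labels, digits=4))
```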

### Framework versions

- Transformers 4.33.0
- PyTorch Lightning 2.0.8
- Tokenizers 0.13.3

If you want to support me, you can do so [here](https://ko-fi.com/adamcodd).