philschmid HF staff committed on
Commit
2683e51
1 Parent(s): 3560b32

Update README.md

Files changed (1)
  1. README.md +1 -118
README.md CHANGED
@@ -11,124 +11,7 @@ datasets:
  - emotion
  metrics:
  - Accuracy, F1 Score
- model-index:
- - name: bhadresh-savani/distilbert-base-uncased-emotion
-   results:
-   - task:
-       type: text-classification
-       name: Text Classification
-     dataset:
-       name: emotion
-       type: emotion
-       config: default
-       split: test
-     metrics:
-     - name: Accuracy
-       type: accuracy
-       value: 0.927
-       verified: true
-     - name: Precision Macro
-       type: precision
-       value: 0.8880230732280744
-       verified: true
-     - name: Precision Micro
-       type: precision
-       value: 0.927
-       verified: true
-     - name: Precision Weighted
-       type: precision
-       value: 0.9272902840835793
-       verified: true
-     - name: Recall Macro
-       type: recall
-       value: 0.8790126653780703
-       verified: true
-     - name: Recall Micro
-       type: recall
-       value: 0.927
-       verified: true
-     - name: Recall Weighted
-       type: recall
-       value: 0.927
-       verified: true
-     - name: F1 Macro
-       type: f1
-       value: 0.8825061528287809
-       verified: true
-     - name: F1 Micro
-       type: f1
-       value: 0.927
-       verified: true
-     - name: F1 Weighted
-       type: f1
-       value: 0.926876082854655
-       verified: true
-     - name: loss
-       type: loss
-       value: 0.17403268814086914
-       verified: true
  ---
- # Distilbert-base-uncased-emotion
-
- ## Model description:
- [DistilBERT](https://arxiv.org/abs/1910.01108) is trained with knowledge distillation during the pre-training phase, which reduces the size of a BERT model by 40% while retaining 97% of its language-understanding capabilities. It is smaller and faster than BERT and other BERT-based models.
-
- [Distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) was fine-tuned on the emotion dataset using the Hugging Face Trainer with the following hyperparameters:
- ```
- learning_rate = 2e-5
- batch_size = 64
- num_train_epochs = 8
- ```
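- A minimal sketch of what this fine-tuning setup could look like with the Trainer API (the `tokenize` helper and every setting other than the hyperparameters above are illustrative assumptions, not taken from the original notebook):
- ```python
- from datasets import load_dataset
- from transformers import (
-     AutoModelForSequenceClassification,
-     AutoTokenizer,
-     Trainer,
-     TrainingArguments,
- )
-
- # Load the emotion dataset and the base checkpoint named on this card.
- dataset = load_dataset("emotion")
- tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
- model = AutoModelForSequenceClassification.from_pretrained(
-     "distilbert-base-uncased", num_labels=6  # emotion has six labels
- )
-
- # Tokenize the tweets; padding/truncation keep batches rectangular.
- def tokenize(batch):
-     return tokenizer(batch["text"], padding="max_length", truncation=True)
-
- dataset = dataset.map(tokenize, batched=True)
-
- # Hyperparameters from this card; "batch size 64" is applied per device
- # here (an assumption), everything else stays at Trainer defaults.
- args = TrainingArguments(
-     output_dir="distilbert-base-uncased-emotion",
-     learning_rate=2e-5,
-     per_device_train_batch_size=64,
-     num_train_epochs=8,
- )
- trainer = Trainer(
-     model=model,
-     args=args,
-     train_dataset=dataset["train"],
-     eval_dataset=dataset["validation"],
- )
- trainer.train()
- ```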
-
- ## Model Performance Comparison on the Twitter Emotion Dataset:
-
- | Model | Accuracy (%) | F1 Score (%) | Test Samples per Second |
- | --- | --- | --- | --- |
- | [Distilbert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/distilbert-base-uncased-emotion) | 93.8 | 93.79 | 398.69 |
- | [Bert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/bert-base-uncased-emotion) | 94.05 | 94.06 | 190.152 |
- | [Roberta-base-emotion](https://huggingface.co/bhadresh-savani/roberta-base-emotion) | 93.95 | 93.97 | 195.639 |
- | [Albert-base-v2-emotion](https://huggingface.co/bhadresh-savani/albert-base-v2-emotion) | 93.6 | 93.65 | 182.794 |
-
- ## How to Use the model:
- ```python
- from transformers import pipeline
-
- # return_all_scores=True returns a score for every emotion label
- # (newer transformers versions use top_k=None for the same behaviour).
- classifier = pipeline("text-classification", model='bhadresh-savani/distilbert-base-uncased-emotion', return_all_scores=True)
- prediction = classifier("I love using transformers. The best part is wide range of support and its easy to use")
- print(prediction)
-
- """
- Output:
- [[
- {'label': 'sadness', 'score': 0.0006792712374590337},
- {'label': 'joy', 'score': 0.9959300756454468},
- {'label': 'love', 'score': 0.0009452480007894337},
- {'label': 'anger', 'score': 0.0018055217806249857},
- {'label': 'fear', 'score': 0.00041110432357527316},
- {'label': 'surprise', 'score': 0.0002288572577526793}
- ]]
- """
- ```
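- If only the single most likely emotion is needed, this small follow-up (illustrative, not part of the original card) picks the top-scoring label from the output above:
- ```python
- # `prediction` is a list with one list of {label, score} dicts per input text.
- top = max(prediction[0], key=lambda d: d["score"])
- print(top["label"], round(top["score"], 4))  # joy 0.9959
- ```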
-
- ## Dataset:
- [Twitter-Sentiment-Analysis](https://huggingface.co/nlp/viewer/?dataset=emotion).
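- The dataset can be inspected directly with the `datasets` library (a minimal sketch; the printed split sizes depend on the dataset version):
- ```python
- from datasets import load_dataset
-
- # "emotion" ships with train/validation/test splits and six labels.
- emotion = load_dataset("emotion")
- print(emotion)                                    # split names and sizes
- print(emotion["train"].features["label"].names)   # label names
- print(emotion["train"][0])                        # {'text': ..., 'label': ...}
- ```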
-
- ## Training procedure
- [Colab Notebook](https://github.com/bhadreshpsavani/ExploringSentimentalAnalysis/blob/main/SentimentalAnalysisWithDistilbert.ipynb)
-
- ## Eval results
- ```json
- {
-     "test_accuracy": 0.938,
-     "test_f1": 0.937932884041714,
-     "test_loss": 0.1472451239824295,
-     "test_mem_cpu_alloc_delta": 0,
-     "test_mem_cpu_peaked_delta": 0,
-     "test_mem_gpu_alloc_delta": 0,
-     "test_mem_gpu_peaked_delta": 163454464,
-     "test_runtime": 5.0164,
-     "test_samples_per_second": 398.69
- }
- ```
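- The accuracy and F1 numbers above could be produced by a `compute_metrics` function along these lines (a sketch; the use of scikit-learn and `average="weighted"` are assumptions, not taken from the original notebook):
- ```python
- from sklearn.metrics import accuracy_score, f1_score
-
- def compute_metrics(eval_pred):
-     # eval_pred is the (logits, labels) pair supplied by Trainer evaluation.
-     logits, labels = eval_pred
-     preds = logits.argmax(axis=-1)
-     return {
-         "accuracy": accuracy_score(labels, preds),
-         "f1": f1_score(labels, preds, average="weighted"),
-     }
- ```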
-
- ## Reference:
- * [Natural Language Processing with Transformers by Lewis Tunstall, Leandro von Werra, and Thomas Wolf](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/)
+ # Fork of [bhadresh-savani/distilbert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/distilbert-base-uncased-emotion)