---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: distilbert-base-uncased-finetuned-emotion-balanced
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion-balanced
      type: emotion
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9521
    - name: Loss
      type: loss
      value: 0.1216
    - name: F1
      type: f1
      value: 0.9520944952964783
widget:
- text: Your actions were very caring.
  example_title: Test sentence
datasets:
- AdamCodd/emotion-balanced
---


# distilbert-base-uncased-finetuned-emotion-balanced

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [emotion balanced dataset](https://huggingface.co/datasets/AdamCodd/emotion-balanced).
It achieves the following results on the evaluation set:
- Loss: 0.1216
- Accuracy: 0.9521
- F1 (macro): 0.9521

## Model description

This emotion classifier was trained on 89,754 examples split into train, validation, and test sets, with each label perfectly balanced in every split.
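
You can try the model directly with the `transformers` pipeline API. A minimal sketch, assuming the model is hosted under the dataset owner's namespace (`AdamCodd/distilbert-base-uncased-finetuned-emotion-balanced`); adjust the hub id if the repository lives elsewhere.

```python
# Minimal inference sketch using the transformers pipeline API.
# The hub id below is assumed from the model name and the dataset owner.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="AdamCodd/distilbert-base-uncased-finetuned-emotion-balanced",
)

# The widget example from this card:
print(classifier("Your actions were very caring."))
# Returns a list like [{'label': ..., 'score': ...}] with one of the six
# emotion labels (sadness, joy, love, anger, fear, surprise).
```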

## Intended uses & limitations

The model is intended for single-label emotion classification of short English texts, predicting one of six labels: sadness, joy, love, anger, fear, surprise. Since it was fine-tuned on a single dataset, it may not generalize to domains, registers, or languages that differ from the training data, and it inherits the limitations and potential biases of distilbert-base-uncased.

## Training and evaluation data

Training, validation, and test data come from [AdamCodd/emotion-balanced](https://huggingface.co/datasets/AdamCodd/emotion-balanced): 89,754 examples in total, perfectly balanced across the six labels in every split. The test split contains 8,976 examples (1,496 per label), matching the support column in the report below.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 1270
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 150
- num_epochs: 1
- weight_decay: 0.01
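
As a rough sketch, the optimizer and scheduler above could be wired up as follows. This is an illustration under the stated hyperparameters, not the original training script (which, per the framework versions below, used PyTorch Lightning); the step count is an assumption derived from the split sizes.

```python
# Illustrative optimizer/scheduler setup matching the listed hyperparameters.
import torch
from transformers import (
    AutoModelForSequenceClassification,
    get_linear_schedule_with_warmup,
)

torch.manual_seed(1270)  # seed from the hyperparameter list

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=6
)

# AdamW stands in here for "Adam with weight decay"; the exact decay
# coupling in the original run may have differed.
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=3e-5,
    betas=(0.9, 0.999),
    eps=1e-8,
    weight_decay=0.01,
)

# One epoch at train_batch_size=32; assumes a 71,802-example train split
# (89,754 total minus two 8,976-example validation/test splits).
num_training_steps = (71_802 + 31) // 32
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=150,
    num_training_steps=num_training_steps,
)
```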

### Training results

                  precision    recall  f1-score   support

         sadness     0.9882    0.9485    0.9679      1496
             joy     0.9956    0.9057    0.9485      1496
            love     0.9256    0.9980    0.9604      1496
           anger     0.9628    0.9519    0.9573      1496
            fear     0.9348    0.9098    0.9221      1496
        surprise     0.9160    0.9987    0.9555      1496

        accuracy                         0.9521      8976
       macro avg     0.9538    0.9521    0.9520      8976
    weighted avg     0.9538    0.9521    0.9520      8976

    test_acc:     0.9520944952964783
    test_loss:    0.121663898229599
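
The per-class table above follows the layout of scikit-learn's `classification_report`. A sketch of regenerating it, assuming `y_true` and `y_pred` are integer label arrays for the 8,976-example test split (obtaining them from the model is omitted here):

```python
# Sketch: rebuild the per-class report above with scikit-learn.
from sklearn.metrics import classification_report

LABELS = ["sadness", "joy", "love", "anger", "fear", "surprise"]

def print_report(y_true, y_pred):
    """Print precision/recall/F1 per class in the format shown above."""
    print(classification_report(y_true, y_pred, target_names=LABELS, digits=4))
```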

### Framework versions

- Transformers 4.33.1
- PyTorch Lightning 2.0.8
- Tokenizers 0.13.3