# distilbert-base-uncased-finetuned-emotions
This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
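As a quick sanity check on these settings, the arithmetic below works out the optimizer step count and the linear learning-rate schedule. It assumes the emotion dataset's standard 16,000-example train split (that size is not stated in this card) and no warmup steps:

```python
import math

# Assumed: the emotion dataset's standard train split has 16,000 examples.
train_examples = 16_000
train_batch_size = 64
num_epochs = 30
learning_rate = 2e-5

steps_per_epoch = math.ceil(train_examples / train_batch_size)  # 250
total_steps = steps_per_epoch * num_epochs                      # 7,500

def linear_lr(step, base_lr=learning_rate, total=total_steps):
    """Linear decay with no warmup: base_lr at step 0, 0 at the final step."""
    return base_lr * max(0.0, 1.0 - step / total)

print(steps_per_epoch, total_steps)          # 250 7500
print(linear_lr(0), linear_lr(total_steps))  # 2e-05 0.0
```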
### Training results
Training loss by epoch (validation loss was not logged during these runs):

Epoch | Training Loss | Validation Loss |
---|---|---|
1 | 0.761500 | No log |
2 | 0.193600 | No log |
3 | 0.132400 | No log |
4 | 0.101800 | No log |
5 | 0.085500 | No log |
6 | 0.068500 | No log |
7 | 0.055500 | No log |
8 | 0.042100 | No log |
9 | 0.034100 | No log |
10 | 0.027300 | No log |
11 | 0.022700 | No log |
12 | 0.020400 | No log |
13 | 0.015000 | No log |
14 | 0.015700 | No log |
15 | 0.012700 | No log |
16 | 0.014500 | No log |
17 | 0.012700 | No log |
18 | 0.011900 | No log |
19 | 0.009400 | No log |
20 | 0.009600 | No log |
21 | 0.008600 | No log |
22 | 0.008000 | No log |
23 | 0.007700 | No log |
24 | 0.007800 | No log |
25 | 0.006500 | No log |
26 | 0.007100 | No log |
27 | 0.005600 | No log |
28 | 0.004800 | No log |
29 | 0.005700 | No log |
30 | 0.004000 | No log |
The 10 highest-loss examples:

index | label | predicted_label | loss | text |
---|---|---|---|---|
882 | love | sadness | 12.565614 | i feel badly about reneging on my commitment t... |
1919 | fear | sadness | 11.828767 | i should admit when consuming alcohol myself i... |
1801 | love | sadness | 10.995152 | i feel that he was being overshadowed by the s... |
1950 | surprise | sadness | 10.986624 | i as representative of everything thats wrong ... |
415 | love | sadness | 10.301595 | im kind of embarrassed about feeling that way ... |
1124 | anger | sadness | 10.188601 | someone acting stupid in public |
929 | anger | joy | 10.099646 | i feel food smarter already and slightly annoy... |
1392 | anger | sadness | 9.992455 | i still dont know how i feel i hated getting w... |
1870 | joy | love | 9.959218 | i guess i feel betrayed because i admired him ... |
60 | love | joy | 9.808908 | i miss our talks our cuddling our kissing and ... |
The 10 lowest-loss examples:

index | label | predicted_label | loss | text |
---|---|---|---|---|
21 | sadness | sadness | 0.000019 | i feel try to tell me im ungrateful tell me im... |
369 | sadness | sadness | 0.000020 | i just need a few minutes to feel put upon and... |
1120 | sadness | sadness | 0.000020 | i am feeling a little disheartened |
1466 | sadness | sadness | 0.000020 | i feel so ungrateful to be wishing this pregna... |
625 | sadness | sadness | 0.000020 | i feel unwelcome in this town as if my time he... |
650 | sadness | sadness | 0.000020 | i am still feeling gloomy and down |
473 | sadness | sadness | 0.000020 | i have this mixed up kinda feeling and i reall... |
133 | sadness | sadness | 0.000020 | i and feel quite ungrateful for it but i m loo... |
368 | sadness | sadness | 0.000020 | i have to admit that i m feeling quite gloomy ... |
1295 | sadness | sadness | 0.000020 | i feel a little damaged |
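The loss column above is per-example cross-entropy, so it can be converted back into the probability the model assigned to the true label (`exp(-loss)`). A minimal sketch, using the extreme values from the tables:

```python
import math

def confidence_from_loss(loss):
    """Cross-entropy loss -> probability the model assigned to the true label."""
    return math.exp(-loss)

# Highest-loss example (index 882, loss ~12.57): near-zero probability
# on the true "love" label.
p_worst = confidence_from_loss(12.565614)

# Lowest-loss examples (loss ~2e-5): essentially certain.
p_best = confidence_from_loss(0.000020)

print(f"{p_worst:.2e}")  # ~3.5e-06
print(f"{p_best:.6f}")   # ~0.999980
```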
## Inference example

```python
import pandas as pd
import matplotlib.pyplot as plt
from transformers import pipeline

# Download the model from the Hub
classifier = pipeline(
    "text-classification",
    model="JakeClark/distilbert-base-uncased-finetuned-emotions",
)

custom_text = "When we say evil, we’re not exaggerating. It’s here."
preds = classifier(custom_text, return_all_scores=True)  # one score per class

# Plot the per-class probabilities
labels = ["sadness", "joy", "love", "anger", "fear", "surprise"]
preds_df = pd.DataFrame(preds[0])
plt.bar(labels, 100 * preds_df["score"], color="C0")
plt.title(f'"{custom_text}"')
plt.ylabel("Class probability (%)")
plt.show()
```
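With `return_all_scores=True`, the pipeline returns one list per input, containing a `{'label', 'score'}` dict per class whose scores sum to 1. A minimal sketch of working with that structure without pandas or matplotlib (the scores below are made-up placeholders, not real output from this model):

```python
# Mocked pipeline output for one input: one {'label', 'score'} dict per class.
# These scores are illustrative placeholders, not the model's predictions.
preds = [[
    {"label": "sadness", "score": 0.02},
    {"label": "joy", "score": 0.01},
    {"label": "love", "score": 0.01},
    {"label": "anger", "score": 0.35},
    {"label": "fear", "score": 0.60},
    {"label": "surprise", "score": 0.01},
]]

# Convert scores to percentages and pick the top class.
percentages = {d["label"]: round(100 * d["score"], 1) for d in preds[0]}
top = max(preds[0], key=lambda d: d["score"])

print(percentages)
print(top["label"], f'{top["score"]:.0%}')  # fear 60%
```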
## Framework versions
- Transformers 4.42.3
- PyTorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1