---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.924
    - name: F1
      type: f1
      value: 0.9240890586429673
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: emotion
      type: emotion
      config: default
      split: test
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9205
      verified: true
    - name: Precision Macro
      type: precision
      value: 0.8929490072058283
      verified: true
    - name: Precision Micro
      type: precision
      value: 0.9205
      verified: true
    - name: Precision Weighted
      type: precision
      value: 0.9211200226240503
      verified: true
    - name: Recall Macro
      type: recall
      value: 0.8684334771873932
      verified: true
    - name: Recall Micro
      type: recall
      value: 0.9205
      verified: true
    - name: Recall Weighted
      type: recall
      value: 0.9205
      verified: true
    - name: F1 Macro
      type: f1
      value: 0.8773142078752364
      verified: true
    - name: F1 Micro
      type: f1
      value: 0.9205
      verified: true
    - name: F1 Weighted
      type: f1
      value: 0.9200675517901923
      verified: true
    - name: loss
      type: loss
      value: 0.2167544811964035
      verified: true
---
# distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2186
- Accuracy: 0.924
- F1: 0.9241
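
For quick inference, a checkpoint like this is typically loaded through the `text-classification` pipeline. The snippet below is a minimal sketch; the repo id is a placeholder for wherever this checkpoint is published on the Hub.

```python
from transformers import pipeline

# Placeholder repo id; substitute the actual Hub path of this checkpoint.
classifier = pipeline(
    "text-classification",
    model="your-username/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I am so happy you could make it!"))
# -> [{'label': '...', 'score': ...}]; the emotion dataset has six labels:
#    sadness, joy, love, anger, fear, surprise.
```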
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
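
These settings map directly onto `TrainingArguments` in the standard `Trainer` setup that `generated_from_trainer` cards come from. A minimal sketch (dataset loading, tokenization, and the model itself are elided; `evaluation_strategy` is an assumption based on the per-epoch rows in the results table below):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    num_train_epochs=2,
    lr_scheduler_type="linear",   # Adam betas/epsilon above are the optimizer defaults
    evaluation_strategy="epoch",  # assumption: validation ran once per epoch
)
```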
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8218        | 1.0   | 250  | 0.3165          | 0.9025   | 0.9001 |
| 0.2494        | 2.0   | 500  | 0.2186          | 0.924    | 0.9241 |
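
The accuracy and F1 columns are the standard scikit-learn metrics computed on the validation split; a `compute_metrics` function along the following lines is commonly passed to the `Trainer` (the exact function used for this run is an assumption):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        # Weighted F1, matching the 0.9241 reported above.
        "f1": f1_score(labels, preds, average="weighted"),
    }
```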
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1