---
license: apache-2.0
tags:
- generated_from_keras_callback
- text-classification
- sentiment-analysis
base_model: distilbert-base-uncased
model-index:
- name: emotion-analysis-distilbert
  results: []
metrics:
- accuracy
- f1
- confusion_matrix
library_name: transformers
---

# emotion-analysis-distilbert

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the "emotion" dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.9305
- F1 Score (weighted): 0.9300

## Model description

The model is based on the DistilBERT architecture, a distilled version of BERT that is smaller and faster, making it suitable for tasks requiring efficient inference with only a small loss in accuracy relative to BERT. This specific model has been fine-tuned to predict emotions from text inputs.

## Intended uses & limitations

This model is intended for text classification tasks, particularly sentiment analysis and emotion recognition, where input texts need to be categorized into predefined emotion categories. It can be used in various applications such as chatbots, social media sentiment analysis, and customer feedback analysis.

The model's performance may vary based on the diversity and complexity of the emotional expressions in the input data.
It may not generalize well to different domains or languages without further adaptation.

## Training and evaluation data

The model was trained and evaluated on the "emotion" dataset, which includes labeled examples for emotion classification. The dataset consists of training, validation, and test sets, each containing text samples labeled with corresponding emotion categories.

## Emotion Labels and Descriptions

The model predicts the following emotion labels:

- `0`: sadness
- `1`: joy
- `2`: love
- `3`: anger
- `4`: fear
- `5`: surprise
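
Internally, the classification head outputs one logit per label, and the index of the highest logit selects one of the emotions above. A minimal pure-Python sketch of that decoding step (the logits below are illustrative, not real model output):

```python
import math

# Label mapping as documented above.
ID2LABEL = {0: "sadness", 1: "joy", 2: "love", 3: "anger", 4: "fear", 5: "surprise"}

def softmax(logits):
    """Convert raw logits to probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def decode(logits):
    """Return (label, confidence) for the highest-scoring class."""
    probs = softmax(logits)
    idx = max(range(len(probs)), key=probs.__getitem__)
    return ID2LABEL[idx], probs[idx]

# Illustrative logits for the 6-class head.
label, confidence = decode([-1.2, 4.8, 0.3, -0.5, -2.0, -1.1])
print(label)  # joy
```

The same mapping is what `transformers` uses when a model's config carries an `id2label` dictionary.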

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- Optimizer: Adam (learning_rate=5e-05, beta_1=0.9, beta_2=0.999, epsilon=1e-07, amsgrad=False)
- Training precision: float32
- Batch size: 64
- Number of epochs: 3
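
For readers unfamiliar with these knobs, the Adam update implied by the hyperparameters above can be sketched in pure Python (a single-parameter illustration, not the actual training code):

```python
# Adam hyperparameters as documented above.
LR, BETA1, BETA2, EPS = 5e-05, 0.9, 0.999, 1e-07

def adam_step(param, grad, m, v, t):
    """One Adam update with bias correction (t is the 1-based step count)."""
    m = BETA1 * m + (1 - BETA1) * grad          # first-moment (mean) estimate
    v = BETA2 * v + (1 - BETA2) * grad * grad   # second-moment (variance) estimate
    m_hat = m / (1 - BETA1 ** t)                # bias-corrected moments
    v_hat = v / (1 - BETA2 ** t)
    param -= LR * m_hat / (v_hat ** 0.5 + EPS)
    return param, m, v

# Minimize f(p) = p**2 (gradient 2*p) for a few steps.
p, m, v = 1.0, 0.0, 0.0
for t in range(1, 4):
    p, m, v = adam_step(p, grad=2.0 * p, m=m, v=v, t=t)
```

Note that each early step moves the parameter by roughly the learning rate, regardless of the gradient's magnitude; that scale-invariance is the main reason Adam is a common default for fine-tuning.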

### Training results

- Accuracy: 0.9305
- F1 Score (weighted): 0.9300

## Evaluation Metrics

The model's performance was evaluated using the following metrics:
- Accuracy: The proportion of correctly predicted labels.
- F1 Score: The weighted average of precision and recall, which provides a balanced measure for multi-class classification.
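
Both metrics can be computed without any library; a minimal sketch (the labels below are hypothetical, not the actual evaluation set):

```python
from collections import Counter

def accuracy(y_true, y_pred):
    """Proportion of correctly predicted labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def weighted_f1(y_true, y_pred):
    """Per-class F1, averaged weighted by class support."""
    support = Counter(y_true)
    total = 0.0
    for cls, n in support.items():
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        total += n * f1
    return total / len(y_true)

# Hypothetical labels for illustration (ids follow the mapping above).
y_true = [0, 1, 1, 2, 3, 1]
y_pred = [0, 1, 2, 2, 3, 1]
print(accuracy(y_true, y_pred))  # 5 of 6 correct, ~0.833
```

In practice the same numbers come from `sklearn.metrics.accuracy_score` and `f1_score(..., average="weighted")`.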

### Framework versions

- Transformers 4.40.2
- TensorFlow 2.15.0
- Datasets 2.19.1
- Tokenizers 0.19.1