---
license: apache-2.0
tags:
- generated_from_keras_callback
- text-classification
- sentiment-analysis
base_model: distilbert-base-uncased
model-index:
- name: emotion-analysis-distilbert
results: []
metrics:
- accuracy
- f1
- confusion_matrix
library_name: transformers
---
# emotion-analysis-distilbert

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves an accuracy of 0.9305 and a weighted F1 score of 0.9300 on the evaluation set.
## Model description

The model is based on DistilBERT, a distilled version of BERT that offers faster, lighter inference with only a small loss in accuracy. This checkpoint has been fine-tuned to predict emotions from text inputs.
## Intended uses & limitations
This model is intended for text classification tasks, particularly sentiment analysis and emotion recognition, where input texts need to be categorized into predefined emotion categories. It can be used in various applications such as chatbots, social media sentiment analysis, and customer feedback analysis.
The model's performance may vary based on the diversity and complexity of the emotional expressions in the input data. It may not generalize well to different domains or languages without further adaptation.
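As a usage sketch, the model can be queried through the `transformers` pipeline API. The repo id below is a placeholder for wherever this checkpoint is hosted, and the returned labels depend on the fine-tuned classification head:

```python
from transformers import pipeline

# Placeholder repo id; substitute the actual Hub path of this checkpoint.
classifier = pipeline("text-classification", model="<user>/emotion-analysis-distilbert")

print(classifier("I can't wait to see you this weekend!"))
# e.g. [{'label': 'joy', 'score': 0.98}] -- labels depend on the fine-tuned head
```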
## Training and evaluation data
The model was trained and evaluated on the "emotion" dataset, which includes labeled examples for emotion classification. The dataset consists of training, validation, and test sets, each containing text samples labeled with corresponding emotion categories.
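For reference, the dataset can be loaded with the `datasets` library (assuming the public emotion dataset on the Hugging Face Hub, currently published under the id `dair-ai/emotion`):

```python
from datasets import load_dataset

# The "emotion" dataset on the Hub; the canonical id is now "dair-ai/emotion".
dataset = load_dataset("dair-ai/emotion")

print(dataset)               # DatasetDict with 'train', 'validation', and 'test' splits
print(dataset["train"][0])   # e.g. {'text': '...', 'label': 0}
```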
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a Keras sketch of this setup follows the list):
- Optimizer: Adam (learning_rate=5e-05, beta_1=0.9, beta_2=0.999, epsilon=1e-07, amsgrad=False)
- Training precision: float32
- Batch size: 64
- Number of epochs: 3
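A minimal sketch of how this configuration maps onto a Keras training setup. The preprocessing and dataset-building steps are assumptions, not the exact training script; `num_labels=6` is an assumption based on the six classes in the emotion dataset:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Base checkpoint and classification head; num_labels=6 assumes the emotion
# dataset's six classes (sadness, joy, love, anger, fear, surprise).
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = TFAutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=6
)

# Adam configuration matching the hyperparameters listed above.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=5e-05, beta_1=0.9, beta_2=0.999, epsilon=1e-07
)

# Compiling without an explicit loss lets the model fall back to its built-in loss.
model.compile(optimizer=optimizer, metrics=["accuracy"])

# After tokenizing, tf.data pipelines can be built with
# model.prepare_tf_dataset(..., batch_size=64) before calling:
# model.fit(train_set, validation_data=val_set, epochs=3)
```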
### Training results
- Accuracy: 0.9305
- F1 Score: 0.9300
### Evaluation metrics
The model's performance was evaluated using the following metrics:
- Accuracy: The proportion of correctly predicted labels.
- F1 Score: The harmonic mean of precision and recall, averaged across classes weighted by class support, which provides a balanced measure for multi-class classification.
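As an illustration, both metrics can be computed with scikit-learn, assuming `y_true` and `y_pred` hold the gold and predicted label ids for the evaluation set:

```python
from sklearn.metrics import accuracy_score, f1_score

# Toy labels for illustration; in practice these come from the evaluation set.
y_true = [0, 1, 2, 1, 3]
y_pred = [0, 1, 2, 0, 3]

print("Accuracy:", accuracy_score(y_true, y_pred))
print("Weighted F1:", f1_score(y_true, y_pred, average="weighted"))
```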
## Framework versions
- Transformers 4.40.2
- TensorFlow 2.15.0
- Datasets 2.19.1
- Tokenizers 0.19.1