
roberta-base-emotion

Model description:

RoBERTa is BERT pretrained with better hyperparameter choices; the authors described it as a Robustly Optimized BERT Pretraining Approach, hence the name.

roberta-base fine-tuned on the emotion dataset using the Hugging Face Trainer with the following hyperparameters (a training sketch follows the list):

 learning_rate=2e-5,
 batch_size=64,
 num_train_epochs=8,
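For reference, here is a minimal sketch of this setup with the Trainer API. It assumes the emotion dataset is loaded from the Hub as `dair-ai/emotion` and uses current `transformers`/`datasets` APIs; the exact training script is not part of this card.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorWithPadding,
    Trainer,
    TrainingArguments,
)

# The emotion dataset (assumed Hub id: dair-ai/emotion) has six labels:
# sadness, joy, love, anger, fear, surprise.
dataset = load_dataset("dair-ai/emotion")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

dataset = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=6
)

# Hyperparameters from this card
args = TrainingArguments(
    output_dir="roberta-base-emotion",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=8,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    data_collator=DataCollatorWithPadding(tokenizer),  # pad per batch
)
trainer.train()
```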

Model Performance Comparison on the Emotion Dataset from Twitter:

| Model | Accuracy | F1 Score | Test Samples per Second |
|-------|----------|----------|-------------------------|
| Distilbert-base-uncased-emotion | 93.8 | 93.79 | 398.69 |
| Bert-base-uncased-emotion | 94.05 | 94.06 | 190.152 |
| Roberta-base-emotion | 93.95 | 93.97 | 195.639 |
| Albert-base-v2-emotion | 93.6 | 93.65 | 182.794 |

How to Use the model:

```python
from transformers import pipeline

# return_all_scores=True returns a score for every label, not just the top one
classifier = pipeline(
    "text-classification",
    model="bhadresh-savani/roberta-base-emotion",
    return_all_scores=True,
)
prediction = classifier("I love using transformers. The best part is wide range of support and its easy to use")
print(prediction)

"""
Output:
[[
  {'label': 'sadness', 'score': 0.002281982684507966},
  {'label': 'joy', 'score': 0.9726489186286926},
  {'label': 'love', 'score': 0.021365027874708176},
  {'label': 'anger', 'score': 0.0026395076420158148},
  {'label': 'fear', 'score': 0.0007162453257478774},
  {'label': 'surprise', 'score': 0.0003483477921690792}
]]
"""
```

Dataset:

Twitter-Sentiment-Analysis.

Training procedure

Colab Notebook: follow the linked notebook, changing the model name to roberta-base. The training sketch above summarizes the same setup.

Eval results

```python
{
    'test_accuracy': 0.9395,
    'test_f1': 0.9397328860104454,
    'test_loss': 0.14367154240608215,
    'test_runtime': 10.2229,
    'test_samples_per_second': 195.639,
    'test_steps_per_second': 3.13
}
```
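A hedged sketch of how these test_* metrics can be produced with a `compute_metrics` function and the `evaluate` library; the exact evaluation script and the F1 averaging strategy are assumptions, not stated by this card.

```python
import evaluate
import numpy as np

accuracy_metric = evaluate.load("accuracy")
f1_metric = evaluate.load("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_metric.compute(
            predictions=preds, references=labels)["accuracy"],
        # average="weighted" is an assumption; the card does not state it.
        "f1": f1_metric.compute(
            predictions=preds, references=labels, average="weighted")["f1"],
    }

# Attach to the Trainer from the training sketch and run on the test split:
#   trainer = Trainer(..., compute_metrics=compute_metrics)
#   print(trainer.evaluate(dataset["test"], metric_key_prefix="test"))
# The runtime keys (test_runtime, test_samples_per_second, ...) are added
# by the Trainer automatically.
```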
