distilbert-sentiment

This model is a fine-tuned version of distilbert-base-uncased on a subset of the amazon-polarity dataset.

[Update 10/10/23] The model has been retrained on a larger portion of the dataset, improving the loss, F1 score, and accuracy. It achieves the following results on the evaluation set:

  • Loss: 0.116
  • Accuracy: 0.961
  • F1_score: 0.960

Model description

This sentiment classifier was trained with a 360,000-sample training set, a 40,000-sample validation set, and a 40,000-sample test set.
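
The exact sampling procedure is not described in this card; the sketch below only illustrates how a subset of that size could be drawn from the amazon_polarity dataset with the datasets library (the shuffle seed and the use of the train split alone are placeholders, not the author's settings).

from datasets import load_dataset

# amazon_polarity ships with 3.6M train / 400K test examples;
# draw 440,000 examples and split them 360K / 40K / 40K
dataset = load_dataset("amazon_polarity", split="train")
subset = dataset.shuffle(seed=42).select(range(440_000))  # placeholder seed

train_set = subset.select(range(0, 360_000))
val_set   = subset.select(range(360_000, 400_000))
test_set  = subset.select(range(400_000, 440_000))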

Intended uses & limitations

from transformers import pipeline

# Create the pipeline
sentiment_classifier = pipeline('text-classification', model='AdamCodd/distilbert-base-uncased-finetuned-sentiment-amazon')

# Now you can use the pipeline to get the sentiment
result = sentiment_classifier("This product doesn't fit me at all.")
print(result)
#[{'label': 'negative', 'score': 0.9994848966598511}]
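
For reviews longer than DistilBERT's 512-token limit, or to work with raw class probabilities, the model can also be used through the lower-level Transformers classes. A minimal sketch of standard AutoTokenizer/AutoModelForSequenceClassification usage, which this card does not prescribe:

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "AdamCodd/distilbert-base-uncased-finetuned-sentiment-amazon"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

texts = ["This product doesn't fit me at all.", "Works exactly as described, very happy."]
# Truncate to DistilBERT's 512-token limit so long reviews don't overflow
inputs = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)

for text, p in zip(texts, probs):
    label = model.config.id2label[int(p.argmax())]
    print(f"{label} ({p.max().item():.4f}) - {text}")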

Training and evaluation data

The model was trained and evaluated on a subset of the amazon-polarity dataset; see the Model description section above for the split sizes.

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding optimizer/scheduler setup follows this list):

  • learning_rate: 3e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 1270
  • optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 150
  • num_epochs: 2
  • weight_decay: 0.01
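
Below is a minimal sketch of an optimizer and learning-rate scheduler configured with these values; the framework versions below list PyTorch Lightning, so the author's actual training loop likely differs, and the loop itself is omitted here.

import torch
from transformers import AutoModelForSequenceClassification, get_linear_schedule_with_warmup

torch.manual_seed(1270)  # seed

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=3e-5,             # learning_rate
    betas=(0.9, 0.999),  # AdamW betas
    eps=1e-8,            # epsilon
    weight_decay=0.01,   # weight_decay
)

# 360,000 training samples at batch size 32, for 2 epochs
num_training_steps = 2 * (360_000 // 32)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=150,                   # lr_scheduler_warmup_steps
    num_training_steps=num_training_steps,  # linear decay afterwards
)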

Training results

(Previous results, reported by the model evaluator before the retraining)

  • eval_accuracy: 0.94112
  • eval_auc: 0.9849
  • eval_f1_score: 0.9417
  • eval_precision: 0.9321
  • eval_recall: 0.95149
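
These metric names correspond to the standard definitions in scikit-learn; an illustrative computation on toy labels (not the evaluator's actual code):

from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)

y_true  = [1, 0, 1, 1, 0]             # gold labels (toy data)
y_pred  = [1, 0, 1, 0, 0]             # predicted labels
y_score = [0.9, 0.2, 0.8, 0.4, 0.1]   # predicted probability of the positive class

print("eval_accuracy ", accuracy_score(y_true, y_pred))
print("eval_auc      ", roc_auc_score(y_true, y_score))
print("eval_f1_score ", f1_score(y_true, y_pred))
print("eval_precision", precision_score(y_true, y_pred))
print("eval_recall   ", recall_score(y_true, y_pred))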

Framework versions

  • Transformers 4.34.0
  • PyTorch Lightning 2.0.9
  • Tokenizers 0.14.0

If you want to support me, you can here.
