# sentiment_fine_tune_bert

This model is a fine-tuned version of distilbert-base-uncased on a spam classification dataset. It achieves the following results on the evaluation set:
- Loss: 0.0176
## Intended uses & limitations

The model can be used to classify whether a given text is spam or not.
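The card does not include inference code. A minimal sketch of turning the model's two output logits into a spam decision; note that the `ID2LABEL` mapping is an assumption, since the card only reports classes 0 and 1 without naming them:

```python
import math

# Assumed label mapping: the card reports classes 0 and 1 on a spam
# dataset but does not name them explicitly.
ID2LABEL = {0: "not spam", 1: "spam"}

def softmax(logits):
    """Convert raw model logits into probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits):
    """Map a pair of logits to (label, confidence)."""
    probs = softmax(logits)
    idx = max(range(len(probs)), key=probs.__getitem__)
    return ID2LABEL[idx], probs[idx]

# Example with made-up logits, as produced by the model's output head:
label, confidence = classify([-2.1, 3.4])
print(label, round(confidence, 3))
```

In practice the logits would come from running the tokenized text through the fine-tuned model; the post-processing above is the same either way.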
## Training procedure

The model was trained using Transformers' `TFTrainer`.
### Training hyperparameters

The following hyperparameters were used during training:
- num_train_epochs: 2
- per_device_train_batch_size: 8
- per_device_eval_batch_size: 16
- eval_steps: 100
- warmup_steps: 500
- weight_decay: 0.01
- logging_steps: 10
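Assuming the standard `TFTrainingArguments`/`TFTrainer` setup, the hyperparameters above would be passed roughly as follows; `output_dir` is an illustrative assumption, not taken from the original training run:

```python
from transformers import TFTrainingArguments

# Hyperparameters listed in the card; output_dir is assumed.
training_args = TFTrainingArguments(
    output_dir="./results",          # assumed, not in the card
    num_train_epochs=2,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    eval_steps=100,
    warmup_steps=500,
    weight_decay=0.01,
    logging_steps=10,
)
```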
### Training results

Confusion matrix on the evaluation set (rows: true class, columns: predicted class):

```
[[955   0]
 [  0 160]]
```

Classification report:

| class        | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| 0            | 1.00      | 1.00   | 1.00     | 955     |
| 1            | 1.00      | 1.00   | 1.00     | 160     |
| accuracy     |           |        | 1.00     | 1115    |
| macro avg    | 1.00      | 1.00   | 1.00     | 1115    |
| weighted avg | 1.00      | 1.00   | 1.00     | 1115    |
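As a sanity check, every number in the report above follows directly from the confusion matrix; a small sketch recomputing them:

```python
# Confusion matrix from the card: rows = true class, cols = predicted class.
cm = [[955, 0],
      [0, 160]]

def per_class_metrics(cm, cls):
    """Precision, recall and F1 for one class of a 2x2 confusion matrix."""
    tp = cm[cls][cls]
    fp = sum(cm[r][cls] for r in range(2)) - tp  # predicted cls, actually other
    fn = sum(cm[cls][c] for c in range(2)) - tp  # actually cls, predicted other
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

accuracy = (cm[0][0] + cm[1][1]) / sum(sum(row) for row in cm)
print(per_class_metrics(cm, 0), per_class_metrics(cm, 1), accuracy)
```

With zero off-diagonal counts, every metric comes out exactly 1.0, matching the report; perfect evaluation scores like these are worth double-checking against a held-out set in practice.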
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Tokenizers 0.15.0
## Model tree for sweetpablo/sentiment_fine_tune_bert

Base model: distilbert/distilbert-base-uncased