
liewchooichin/distilbert-base-uncased-tiny-imdb

This model is a fine-tuned version of distilbert-base-uncased on the stanfordnlp/imdb dataset. It achieves the following results:

  • Train Loss: 2.9373
  • Validation Loss: 2.9930
  • Epoch: 2

Model description

This model was created by following the lesson in the Hugging Face Learn NLP course: Main NLP Tasks -- Fine-tuning a masked language model.
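
Because this is a masked language model, the checkpoint can be tried directly with the fill-mask pipeline. A minimal usage sketch; the example sentence is illustrative only:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a fill-mask pipeline.
mask_filler = pipeline(
    "fill-mask",
    model="liewchooichin/distilbert-base-uncased-tiny-imdb",
)

# DistilBERT's mask token is [MASK]; print the top predictions.
for pred in mask_filler("This movie is a great [MASK]."):
    print(f"{pred['token_str']}: {pred['score']:.4f}")
```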

Intended uses & limitations

This is only a small-scale fine-tuning on the stanfordnlp/imdb dataset. Only 1,000 rows of the unsupervised split were used for training. The exercise was carried out on Google Colab with a T4 GPU.

Training and evaluation data

1,000 rows from the unsupervised split of the stanfordnlp/imdb dataset.
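
A minimal sketch of that selection, assuming the datasets library; the shuffle and seed are assumptions, not values recorded in this card:

```python
from datasets import load_dataset

# Load the unlabeled (unsupervised) split of IMDB.
imdb = load_dataset("stanfordnlp/imdb", split="unsupervised")

# Keep 1,000 rows for the small-scale training run.
small_sample = imdb.shuffle(seed=42).select(range(1000))  # seed is hypothetical
print(small_sample)
```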

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • optimizer: AdamWeightDecay
    • learning_rate: WarmUp over the first 1000 steps (initial_learning_rate: 2e-05), then keras PolynomialDecay (initial_learning_rate: 2e-05, decay_steps: -969, end_learning_rate: 0.0, power: 1.0, cycle: False)
    • beta_1: 0.9
    • beta_2: 0.999
    • epsilon: 1e-08
    • amsgrad: False
    • weight_decay_rate: 0.01
  • training_precision: mixed_float16
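
These values match what the transformers create_optimizer helper produces for TensorFlow, so the setup can be reconstructed roughly as below. Note that create_optimizer sets decay_steps = num_train_steps - num_warmup_steps, so the recorded decay_steps of -969 implies about 31 total training steps, i.e. fewer steps than the warmup; num_train_steps=31 in the sketch is that inference, not a value recorded in the card.

```python
import tensorflow as tf
from transformers import create_optimizer

# Mixed-precision policy matching training_precision: mixed_float16.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# Reconstruction of the AdamWeightDecay + WarmUp/PolynomialDecay optimizer.
# num_train_steps=31 is inferred (hypothetical): decay_steps = 31 - 1000 = -969.
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_warmup_steps=1000,
    num_train_steps=31,
    weight_decay_rate=0.01,
)
```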

Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.2484     | 3.2338          | 0     |
| 3.0821     | 2.8758          | 1     |
| 2.9373     | 2.9930          | 2     |
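
For a masked language model, perplexity is simply the exponential of the cross-entropy loss, so the final validation loss of 2.9930 corresponds to a perplexity of about 19.9:

```python
import math

# Perplexity = exp(cross-entropy loss); final validation loss was 2.9930.
print(math.exp(2.9930))  # ~19.95
```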

Framework versions

  • Transformers 4.40.2
  • TensorFlow 2.15.0
  • Datasets 2.19.1
  • Tokenizers 0.19.1
