distilgpt2-finetuned-custom-mail

This model is a fine-tuned version of distilgpt2; the training dataset is not documented, though the model name suggests a custom e-mail corpus. It achieves the following results on the evaluation set:

  • Loss: 3.1905
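
For reference, a loss of 3.1905 corresponds to a perplexity of exp(3.1905) ≈ 24.3. Below is a minimal inference sketch; the repository id is assumed from this card's title and should be adjusted to the model's actual Hub path:

```python
# Minimal generation sketch (not from the original card).
# The model id below is assumed from the card title; adjust as needed.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2-finetuned-custom-mail")
print(generator("Dear team,", max_new_tokens=50)[0]["generated_text"])
```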

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
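
The sketch below shows a Trainer setup matching these hyperparameters. The dataset, sequence length, and train/eval split are assumptions, since the actual training data is not documented:

```python
# A minimal sketch reproducing the hyperparameters above with the HF Trainer.
# The dataset is a placeholder: the real corpus is undocumented.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

texts = ["Placeholder e-mail text."] * 64  # assumption: stands in for the real corpus
raw = Dataset.from_dict({"text": texts}).train_test_split(test_size=0.1, seed=42)
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

args = TrainingArguments(
    output_dir="distilgpt2-finetuned-custom-mail",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    evaluation_strategy="epoch",  # the card reports validation loss per epoch
)
# The Trainer's default optimizer is AdamW with betas=(0.9, 0.999) and
# epsilon=1e-08, matching the optimizer listed above.

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```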

Training results

The training loss column reads "No log", most likely because the run's 140 total steps never reached the Trainer's default logging interval, so no training loss was recorded.

Training Loss   Epoch   Step   Validation Loss
No log           1.0       7   3.5915
No log           2.0      14   3.4986
No log           3.0      21   3.4418
No log           4.0      28   3.3970
No log           5.0      35   3.3569
No log           6.0      42   3.3207
No log           7.0      49   3.2972
No log           8.0      56   3.2806
No log           9.0      63   3.2620
No log          10.0      70   3.2451
No log          11.0      77   3.2302
No log          12.0      84   3.2177
No log          13.0      91   3.2083
No log          14.0      98   3.2024
No log          15.0     105   3.1984
No log          16.0     112   3.1962
No log          17.0     119   3.1938
No log          18.0     126   3.1920
No log          19.0     133   3.1913
No log          20.0     140   3.1905

Framework versions

  • Transformers 4.23.1
  • Pytorch 1.12.1+cu113
  • Datasets 2.6.1
  • Tokenizers 0.13.1