
healthinsurance_textgen

This model is a fine-tuned version of distilbert/distilgpt2 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.6360
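
Assuming this is the mean token-level cross-entropy in nats (the standard causal language-modeling loss), it corresponds to a perplexity of about exp(1.6360) ≈ 5.13.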

Model description

More information needed

Intended uses & limitations

More information needed
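
The card does not yet document uses or limitations. For illustration only, here is a minimal generation sketch; the model id is hypothetical, so substitute the actual Hub repo id or a local checkpoint path:

```python
# Minimal text-generation sketch. "healthinsurance_textgen" is a
# hypothetical id; replace it with the real Hub repo id or local path.
from transformers import pipeline

generator = pipeline("text-generation", model="healthinsurance_textgen")

result = generator(
    "A health insurance deductible is",
    max_new_tokens=50,
    do_sample=True,
    top_p=0.9,
)
print(result[0]["generated_text"])
```

As a distilgpt2 fine-tune, the model inherits GPT-2's known biases, and its generations should not be treated as factual insurance advice.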

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged TrainingArguments sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
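
For reference, the settings above map onto TrainingArguments as sketched below; the output directory and the per-epoch evaluation cadence are assumptions inferred from the results table, not documented in the card:

```python
# Hedged reconstruction of the listed hyperparameters as TrainingArguments
# (Transformers 4.39). output_dir and evaluation_strategy are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="healthinsurance_textgen",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    evaluation_strategy="epoch",  # assumed: one eval loss per epoch below
)
```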

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 10   | 2.5799          |
| No log        | 2.0   | 20   | 2.3829          |
| No log        | 3.0   | 30   | 2.2232          |
| No log        | 4.0   | 40   | 2.1162          |
| No log        | 5.0   | 50   | 2.0297          |
| No log        | 6.0   | 60   | 1.9680          |
| No log        | 7.0   | 70   | 1.9128          |
| No log        | 8.0   | 80   | 1.8481          |
| No log        | 9.0   | 90   | 1.8161          |
| No log        | 10.0  | 100  | 1.7868          |
| No log        | 11.0  | 110  | 1.7447          |
| No log        | 12.0  | 120  | 1.7269          |
| No log        | 13.0  | 130  | 1.7026          |
| No log        | 14.0  | 140  | 1.6866          |
| No log        | 15.0  | 150  | 1.6742          |
| No log        | 16.0  | 160  | 1.6633          |
| No log        | 17.0  | 170  | 1.6499          |
| No log        | 18.0  | 180  | 1.6432          |
| No log        | 19.0  | 190  | 1.6379          |
| No log        | 20.0  | 200  | 1.6360          |

("No log" in the training-loss column means the run finished its 200 steps before the Trainer's default logging interval of 500 steps ever recorded a training loss.)
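
Applying the same exp(loss) conversion to a few rows shows that most of the improvement came in the first ten epochs, with the curve still inching down at epoch 20:

```python
# Validation cross-entropy (nats/token) -> perplexity for selected epochs.
import math

for epoch, loss in [(1, 2.5799), (10, 1.7868), (20, 1.6360)]:
    print(f"epoch {epoch:>2}: perplexity ≈ {math.exp(loss):.2f}")
# epoch  1: perplexity ≈ 13.20
# epoch 10: perplexity ≈ 5.97
# epoch 20: perplexity ≈ 5.13
```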

Framework versions

  • Transformers 4.39.3
  • Pytorch 2.2.2+cpu
  • Datasets 2.18.0
  • Tokenizers 0.15.2