gpt2-xl_ft_mult_10k

This model is a fine-tuned version of gpt2-xl on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6916
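
A minimal generation sketch follows. The repo id is an assumption (the card does not state the namespace where this checkpoint is hosted); substitute the actual model path.

```python
# Minimal usage sketch. The repo id below is an assumption -- substitute
# the actual path where this checkpoint is hosted.
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model_id = "gpt2-xl_ft_mult_10k"  # hypothetical repo id
tokenizer = GPT2TokenizerFast.from_pretrained(model_id)
model = GPT2LMHeadModel.from_pretrained(model_id)

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```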

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a Trainer configuration sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 32
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 4
  • mixed_precision_training: Native AMP
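
These hyperparameters map directly onto Hugging Face TrainingArguments. A sketch of how the configuration might be reproduced, assuming the Trainer API was used (the card does not confirm this); the dataset is a placeholder, since the training data is not documented:

```python
# Sketch of the training configuration above using the Trainer API.
# Only the hyperparameters are taken from this card; model and data
# handling are assumptions.
from transformers import (
    GPT2LMHeadModel,
    GPT2TokenizerFast,
    Trainer,
    TrainingArguments,
    DataCollatorForLanguageModeling,
)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-xl")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2-xl")

args = TrainingArguments(
    output_dir="gpt2-xl_ft_mult_10k",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=32,  # effective train batch size: 4 * 32 = 128
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=4,
    fp16=True,                       # "Native AMP" mixed precision
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    # train_dataset=...,  # not documented on this card
    # eval_dataset=...,   # not documented on this card
)
```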

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 0.99  | 54   | 1.3358          |
| No log        | 1.99  | 108  | 0.7486          |
| No log        | 2.99  | 162  | 0.6997          |
| No log        | 3.99  | 216  | 0.6916          |

Framework versions

  • Transformers 4.17.0
  • Pytorch 1.10.0+cu111
  • Datasets 2.0.0
  • Tokenizers 0.11.6

Perplexity

Score: 25.89222526550293
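
The card does not document how this score was obtained. A generic way to compute perplexity for a causal LM is shown below; the evaluation text is a placeholder, and the repo id is the same assumption as above.

```python
# Generic perplexity sketch for a causal LM. The evaluation text is a
# placeholder; the card does not document the actual evaluation protocol.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model_id = "gpt2-xl_ft_mult_10k"  # hypothetical repo id, as above
tokenizer = GPT2TokenizerFast.from_pretrained(model_id)
model = GPT2LMHeadModel.from_pretrained(model_id)
model.eval()

text = "Some held-out evaluation text goes here."
enc = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # With labels == input_ids the model shifts the labels internally and
    # returns the mean cross-entropy over predicted tokens.
    loss = model(**enc, labels=enc["input_ids"]).loss

print(f"perplexity = {math.exp(loss.item()):.2f}")
```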

Dataset Size

Size: 5000
