---
license: mit
tags:
- generated_from_trainer
base_model: gpt2
model-index:
- name: gpt2-evy
  results: []
---

# gpt2-evy

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 1.2693
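
Assuming this is the mean token-level cross-entropy (in nats) that the Trainer reports for causal language models, it corresponds to an evaluation perplexity of roughly exp(1.2693) ≈ 3.56. A minimal check:

```python
import math

# Assuming the reported eval loss is mean token-level cross-entropy in nats,
# perplexity is simply its exponential.
eval_loss = 1.2693
print(f"perplexity: {math.exp(eval_loss):.2f}")  # ≈ 3.56
```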

## Model description

More information needed

## Intended uses & limitations

More information needed
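
Pending fuller documentation, the snippet below is a minimal sketch of loading the checkpoint for generation. The hub id `joshcarp/gpt2-evy` is inferred from the repository path and is an assumption; substitute the actual model id or a local checkpoint directory.

```python
from transformers import pipeline

# Minimal sketch of running the checkpoint for text generation.
# "joshcarp/gpt2-evy" is inferred from the repo path (an assumption);
# replace it with the correct model id or a local directory if it differs.
generator = pipeline("text-generation", model="joshcarp/gpt2-evy")
print(generator("Hello,", max_new_tokens=40)[0]["generated_text"])
```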

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the list):

- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
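
A minimal sketch of how these values map onto the Hugging Face `Trainer` configuration (the card is tagged `generated_from_trainer`). The `output_dir` and the per-epoch evaluation cadence are assumptions; the Adam betas and epsilon listed above are the optimizer defaults, so they need no explicit flags.

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters as Trainer arguments.
args = TrainingArguments(
    output_dir="gpt2-evy",           # assumed output location
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    evaluation_strategy="epoch",     # results table logs one eval per epoch
)
```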

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 7    | 1.5530          |
| No log        | 2.0   | 14   | 1.4601          |
| No log        | 3.0   | 21   | 1.3953          |
| No log        | 4.0   | 28   | 1.3557          |
| No log        | 5.0   | 35   | 1.3301          |
| No log        | 6.0   | 42   | 1.3117          |
| No log        | 7.0   | 49   | 1.2968          |
| No log        | 8.0   | 56   | 1.2832          |
| No log        | 9.0   | 63   | 1.2769          |
| No log        | 10.0  | 70   | 1.2807          |
| No log        | 11.0  | 77   | 1.2698          |
| No log        | 12.0  | 84   | 1.2707          |
| No log        | 13.0  | 91   | 1.2747          |
| No log        | 14.0  | 98   | 1.2695          |
| 1.0557        | 15.0  | 105  | 1.2699          |
| 1.0557        | 16.0  | 112  | 1.2656          |
| 1.0557        | 17.0  | 119  | 1.2711          |
| 1.0557        | 18.0  | 126  | 1.2696          |
| 1.0557        | 19.0  | 133  | 1.2691          |
| 1.0557        | 20.0  | 140  | 1.2693          |

The "No log" entries indicate that the training loss had not yet been recorded at that evaluation step; the Trainer's logging interval was first reached between steps 98 and 105.

### Framework versions

- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1