
chess-gpt2-ft_v0.1

This model is a fine-tuned version of gpt2 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.2815
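
Assuming this loss is the mean per-token cross-entropy (the usual Trainer evaluation metric), it corresponds to an evaluation perplexity of roughly exp(1.2815) ≈ 3.60.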

Model description

More information needed

Intended uses & limitations

More information needed
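
In the absence of documented usage, here is a minimal inference sketch. It assumes the standard GPT-2 tokenizer and PGN-style move text as the input format; neither is confirmed by this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dakwi/chess-gpt2-ft_v0.1")
model = AutoModelForCausalLM.from_pretrained("dakwi/chess-gpt2-ft_v0.1")

# Prompt with opening moves; the exact move notation (PGN vs. UCI,
# with or without move numbers) is an assumption, not documented here.
prompt = "1. e4 e5 2. Nf3"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```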

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):

  • learning_rate: 0.0003
  • train_batch_size: 32
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 1
  • mixed_precision_training: Native AMP
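
For reference, the settings above map onto transformers TrainingArguments roughly as sketched below. The dataset, data collator, and training script are not documented in this card, so this is an assumption-laden outline rather than the author's actual script.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="chess-gpt2-ft_v0.1",
    learning_rate=3e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # 32 x 2 = 64 effective train batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=1,
    fp16=True,  # "Native AMP" mixed precision
    # Adam betas (0.9, 0.999) and epsilon 1e-08 are the Trainer defaults.
)
```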

Training results

Training Loss  Epoch   Step  Validation Loss
2.0868         0.0885   100  1.7682
1.7195         0.1769   200  1.5783
1.5913         0.2654   300  1.4892
1.5263         0.3538   400  1.4391
1.4804         0.4423   500  1.3965
1.4476         0.5307   600  1.3645
1.4166         0.6192   700  1.3441
1.3967         0.7077   800  1.3226
1.3782         0.7961   900  1.3053
1.3627         0.8846  1000  1.2919
1.3523         0.9730  1100  1.2830

Framework versions

  • Transformers 4.42.4
  • Pytorch 2.4.0+cu121
  • Datasets 2.21.0
  • Tokenizers 0.19.1
