---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
datasets:
- proto_qa
model-index:
- name: my_awesome_generation_model
  results: []
---
# my_awesome_generation_model
This model is a fine-tuned version of gpt2 on the proto_qa dataset. It achieves the following results on the evaluation set:
- Loss: 2.4234
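Since the loss is a token-level cross-entropy in nats, this corresponds to an evaluation perplexity of exp(2.4234) ≈ 11.3. Below is a minimal usage sketch; it assumes the fine-tuned weights are available under the id `my_awesome_generation_model` (on the Hub or as a local directory), and the prompt is only an illustrative ProtoQA-style question, not taken from this card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: "my_awesome_generation_model" is the Hub id or local
# directory where the fine-tuned checkpoint was saved.
tokenizer = AutoTokenizer.from_pretrained("my_awesome_generation_model")
model = AutoModelForCausalLM.from_pretrained("my_awesome_generation_model")

# Hypothetical prompt in ProtoQA's prototypical-situation question style.
prompt = "Name something people are usually late for:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 defines no pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```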
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
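As a starting point, the dataset named in the metadata can be loaded from the Hub. A minimal sketch, assuming the dataset id `proto_qa` and its default configuration:

```python
from datasets import load_dataset

# Assumption: "proto_qa" is the Hub dataset id from the metadata above;
# script-based datasets may need trust_remote_code=True on Datasets 2.18.
dataset = load_dataset("proto_qa", trust_remote_code=True)
print(dataset)              # inspect the available splits
print(dataset["train"][0])  # inspect a single example
```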
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (mirrored in the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
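A minimal sketch of how these values map onto `transformers.TrainingArguments`; the numeric values mirror the list above, while the output directory is an assumption, not taken from this card:

```python
from transformers import TrainingArguments

# Assumption: the model name doubles as the output directory.
training_args = TrainingArguments(
    output_dir="my_awesome_generation_model",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=8,
    adam_beta1=0.9,      # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```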
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.6643        | 0.32  | 20   | 2.8379          |
| 2.942         | 0.65  | 40   | 2.6090          |
| 2.7742        | 0.97  | 60   | 2.5426          |
| 2.6329        | 1.29  | 80   | 2.5053          |
| 2.5239        | 1.61  | 100  | 2.4791          |
| 2.5101        | 1.94  | 120  | 2.4521          |
| 2.4416        | 2.26  | 140  | 2.4436          |
| 2.3676        | 2.58  | 160  | 2.4405          |
| 2.3664        | 2.9   | 180  | 2.4280          |
| 2.2977        | 3.23  | 200  | 2.4291          |
| 2.2861        | 3.55  | 220  | 2.4216          |
| 2.2695        | 3.87  | 240  | 2.4213          |
| 2.1973        | 4.19  | 260  | 2.4208          |
| 2.1874        | 4.52  | 280  | 2.4216          |
| 2.2308        | 4.84  | 300  | 2.4229          |
| 2.18          | 5.16  | 320  | 2.4203          |
| 2.1711        | 5.48  | 340  | 2.4222          |
| 2.1402        | 5.81  | 360  | 2.4208          |
| 2.1064        | 6.13  | 380  | 2.4222          |
| 2.1189        | 6.45  | 400  | 2.4224          |
| 2.0666        | 6.77  | 420  | 2.4228          |
| 2.1272        | 7.1   | 440  | 2.4226          |
| 2.0448        | 7.42  | 460  | 2.4226          |
| 2.123         | 7.74  | 480  | 2.4234          |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2