# GPTL-APPS

This model is a fine-tuned version of [gpt2-large](https://huggingface.co/openai-community/gpt2-large) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.7539 (≈ 2.13 perplexity, assuming the reported loss is mean cross-entropy in nats)
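
As a quick usage sketch (not part of the original card; it assumes the checkpoint is published on the Hub as `AdnanRiaz107/GPTL-APPS` and loads it as a standard GPT-2 causal LM):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hub repo id for this model; adjust if the checkpoint lives elsewhere.
model_id = "AdnanRiaz107/GPTL-APPS"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Example prompt: the "APPS" suffix hints at code tasks, but the training data is unspecified.
prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```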
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 5000
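
For illustration, here is how these settings map onto `transformers.TrainingArguments` (a sketch, not the author's actual training script; `output_dir` is a placeholder and model/data wiring is omitted):

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; Trainer's Adam defaults
# already use betas=(0.9, 0.999) and epsilon=1e-8.
training_args = TrainingArguments(
    output_dir="gptl-apps",         # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=4,  # 4 steps x batch 4 = effective batch size 16
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=5000,                 # training_steps
)
```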
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0941 | 0.04 | 200 | 0.9988 |
| 0.8014 | 0.08 | 400 | 0.9315 |
| 0.8452 | 0.12 | 600 | 0.8909 |
| 0.9507 | 0.16 | 800 | 0.8903 |
| 0.6988 | 0.2 | 1000 | 0.8632 |
| 0.6965 | 0.24 | 1200 | 0.8553 |
| 0.7256 | 0.28 | 1400 | 0.8222 |
| 0.7109 | 0.32 | 1600 | 0.8162 |
| 0.6418 | 0.36 | 1800 | 0.8086 |
| 0.649 | 0.4 | 2000 | 0.8051 |
| 0.7378 | 0.44 | 2200 | 0.7974 |
| 0.7202 | 0.48 | 2400 | 0.7933 |
| 0.6896 | 0.52 | 2600 | 0.7817 |
| 0.5561 | 0.56 | 2800 | 0.7945 |
| 0.6497 | 0.6 | 3000 | 0.7774 |
| 0.735 | 0.64 | 3200 | 0.7758 |
| 0.5507 | 0.68 | 3400 | 0.7741 |
| 0.5615 | 0.72 | 3600 | 0.7677 |
| 0.6098 | 0.76 | 3800 | 0.7605 |
| 0.6038 | 0.8 | 4000 | 0.7653 |
| 0.5356 | 0.84 | 4200 | 0.7562 |
| 0.5699 | 0.88 | 4400 | 0.7586 |
| 0.6348 | 0.92 | 4600 | 0.7547 |
| 0.6458 | 0.96 | 4800 | 0.7539 |
| 0.6236 | 1.0 | 5000 | 0.7539 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2