# gpt2-finetuned-v4-seinfeld
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 2.6941
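The card does not yet include a usage example, so here is a minimal text-generation sketch. The Hub repo id (including the user namespace) and the script-style prompt are placeholders inferred from the model name, not details stated in this card.

```python
from transformers import pipeline

# Placeholder repo id: substitute the actual Hub path of this model.
generator = pipeline("text-generation", model="your-username/gpt2-finetuned-v4-seinfeld")

# "JERRY:" is an assumed script-style prompt, based only on the model name.
print(generator("JERRY:", max_length=50, num_return_sequences=1)[0]["generated_text"])
```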
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged reproduction sketch follows the list):
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 15
- mixed_precision_training: Native AMP
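For readers who want to reproduce this configuration, the sketch below maps the values above onto `transformers.TrainingArguments`. The output directory and the surrounding training script are assumptions; only the hyperparameter values come from this card.

```python
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameters listed above.
# output_dir is a placeholder; every numeric value is taken from the card.
training_args = TrainingArguments(
    output_dir="gpt2-finetuned-v4-seinfeld",  # assumed output path
    learning_rate=5e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,  # 8 x 8 per device = total train batch size of 64
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=15,
    fp16=True,  # "Native AMP" mixed precision
)
```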
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1734        | 0.79  | 8    | 2.9417          |
| 3.1576        | 1.59  | 16   | 2.9239          |
| 3.1234        | 2.4   | 24   | 2.8920          |
| 3.0712        | 3.2   | 32   | 2.8634          |
| 2.9738        | 3.99  | 40   | 2.8342          |
| 2.9761        | 4.79  | 48   | 2.8069          |
| 2.9294        | 5.59  | 56   | 2.7844          |
| 2.9026        | 6.4   | 64   | 2.7665          |
| 2.8501        | 7.2   | 72   | 2.7544          |
| 2.7805        | 7.99  | 80   | 2.7398          |
| 2.7905        | 8.79  | 88   | 2.7293          |
| 2.7661        | 9.59  | 96   | 2.7204          |
| 2.7272        | 10.4  | 104  | 2.7131          |
| 2.7092        | 11.2  | 112  | 2.7056          |
| 2.6392        | 11.99 | 120  | 2.7010          |
| 2.6468        | 12.79 | 128  | 2.6961          |
| 2.6269        | 13.59 | 136  | 2.6899          |
| 2.5952        | 14.4  | 144  | 2.6874          |
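The card reports cross-entropy loss only; perplexity, the usual companion metric for language models, is simply the exponential of that loss. A quick check against the reported eval loss of 2.6941:

```python
import math

# Perplexity = exp(cross-entropy loss), using the eval loss reported above.
print(round(math.exp(2.6941), 2))  # ≈ 14.79
```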
### Framework versions
- Transformers 4.26.1
- PyTorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2