# gpt2-wikitext
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the wikitext dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
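As a usage sketch (not part of the original card): loading the fine-tuned checkpoint for text generation with the `transformers` library. The repo id `"gpt2-wikitext"` below is a placeholder; substitute the actual Hub id this model is published under.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2-wikitext"  # placeholder; use the actual Hub repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short continuation of a prompt.
inputs = tokenizer("The history of the", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```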
## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the configuration sketch after the list):
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
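
A minimal sketch of how these hyperparameters map onto `transformers.TrainingArguments`. The `output_dir` and everything outside the listed values (model init, dataset loading, collation) are assumptions, not taken from this card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gpt2-wikitext",        # assumed output directory
    learning_rate=1e-4,                # learning_rate: 0.0001
    per_device_train_batch_size=32,    # train_batch_size: 32
    per_device_eval_batch_size=32,     # eval_batch_size: 32
    seed=42,                           # seed: 42
    optim="adamw_torch",               # AdamW (torch)
    adam_beta1=0.9,                    # betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                 # epsilon=1e-08
    lr_scheduler_type="linear",        # linear schedule
    num_train_epochs=1,                # num_epochs: 1
)
```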
### Training results

| Step | Epoch    | Training Loss | Grad Norm | Learning Rate |
|-----:|---------:|--------------:|----------:|--------------:|
| 100  | 0.319489 | 9.0711        | 1.319650  | 0.000068      |
| 200  | 0.638978 | 7.6569        | 1.031958  | 0.000036      |
| 300  | 0.958466 | 7.2724        | 0.889421  | 0.000004      |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
Developed by: Min Thein