---
license: mit
tags:
- generated_from_trainer
base_model: gpt2
model-index:
- name: ms-32maps_nonalpha-ds
  results: []
---

# ms-32maps_nonalpha-ds

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.9255

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.2444        | 0.04  | 100  | 6.1360          |
| 6.1152        | 0.08  | 200  | 6.1151          |
| 6.1139        | 0.13  | 300  | 6.0946          |
| 6.0576        | 0.17  | 400  | 6.0382          |
| 6.0133        | 0.21  | 500  | 6.0101          |
| 6.0049        | 0.25  | 600  | 5.9972          |
| 5.9919        | 0.3   | 700  | 5.9890          |
| 5.9839        | 0.34  | 800  | 5.9841          |
| 5.9875        | 0.38  | 900  | 5.9783          |
| 5.9712        | 0.42  | 1000 | 5.9717          |
| 5.9775        | 0.47  | 1100 | 5.9656          |
| 5.9668        | 0.51  | 1200 | 5.9597          |
| 5.955         | 0.55  | 1300 | 5.9553          |
| 5.9445        | 0.59  | 1400 | 5.9495          |
| 5.95          | 0.64  | 1500 | 5.9439          |
| 5.939         | 0.68  | 1600 | 5.9416          |
| 5.9345        | 0.72  | 1700 | 5.9378          |
| 5.9396        | 0.76  | 1800 | 5.9336          |
| 5.9414        | 0.8   | 1900 | 5.9303          |
| 5.9262        | 0.85  | 2000 | 5.9283          |
| 5.9365        | 0.89  | 2100 | 5.9266          |
| 5.937         | 0.93  | 2200 | 5.9258          |
| 5.9251        | 0.97  | 2300 | 5.9255          |

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
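
Since the usage sections above are still placeholders, here is a minimal loading and generation sketch with the standard `transformers` causal-LM API. The repo id `your-username/ms-32maps_nonalpha-ds` and the prompt string are placeholders, not values taken from this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder Hub repo id; replace with the actual location of this checkpoint.
model_id = "your-username/ms-32maps_nonalpha-ds"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Encode a prompt and sample a short continuation from the fine-tuned GPT-2.
inputs = tokenizer("example prompt", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```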
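
The hyperparameters listed under "Training procedure" map roughly onto `transformers.TrainingArguments` as in the sketch below. This is an approximation inferred from the list and from the 100-step evaluation interval visible in the results table, not the original training script; the `output_dir` and eval/logging settings are assumptions.

```python
from transformers import TrainingArguments

# Approximate reconstruction of the reported training configuration.
training_args = TrainingArguments(
    output_dir="ms-32maps_nonalpha-ds",   # assumed output directory
    learning_rate=5e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=10,
    num_train_epochs=1,
    evaluation_strategy="steps",           # assumed: table shows eval every 100 steps
    eval_steps=100,
    logging_steps=100,
)
```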