---
license: mit
base_model: gpt2-large
tags:
- generated_from_trainer
datasets:
- customized
model-index:
- name: gpt2-large-lora-sft
  results: []
---

# gpt2-large-lora-sft

This model is a fine-tuned version of [gpt2-large](https://huggingface.co/gpt2-large) on the customized dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.00013
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 6
- total_train_batch_size: 6
- total_eval_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.5

### Training results

### Framework versions

- Transformers 4.32.1
- Pytorch 2.0.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.3

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Mikivis__gpt2-large-lora-sft)

| Metric               | Value |
|----------------------|------:|
| Avg.                 | 28.05 |
| ARC (25-shot)        | 26.79 |
| HellaSwag (10-shot)  | 44.15 |
| MMLU (5-shot)        | 25.82 |
| TruthfulQA (0-shot)  | 39.06 |
| Winogrande (5-shot)  | 55.09 |
| GSM8K (5-shot)       |  0.00 |
| DROP (3-shot)        |  5.46 |
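
For quick experimentation, below is a minimal loading sketch using `transformers`. It assumes the repository id is `Mikivis/gpt2-large-lora-sft` (inferred from the leaderboard link above) and that merged weights are published in the repo; if only LoRA adapter weights are available, load them with `peft.AutoPeftModelForCausalLM` instead. The prompt format is illustrative, since the SFT template is not documented in this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: repo id inferred from the leaderboard details link; adjust if needed.
model_id = "Mikivis/gpt2-large-lora-sft"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative prompt; the actual SFT instruction format is not specified here.
prompt = "Question: What is supervised fine-tuning?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a short continuation from the fine-tuned model.
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    top_p=0.9,
    temperature=0.7,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```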