---
language:
- ar
library_name: peft
tags:
- generated_from_trainer
datasets:
- dalyaa/darebah6700
base_model: microsoft/phi-2
model-index:
- name: phi-2
  results: []
---

# phi-2

This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the dalyaa/darebah6700 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8023

## Model description

More information needed

## Intended uses & limitations

More information needed. A minimal loading sketch is provided under "How to use" at the end of this card.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto `transformers.TrainingArguments` appears after the framework versions below):
- learning_rate: 2.5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 2500

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2748        | 0.15  | 100  | 1.1349          |
| 1.1095        | 0.29  | 200  | 1.0077          |
| 1.0138        | 0.44  | 300  | 0.9587          |
| 0.9506        | 0.58  | 400  | 0.9188          |
| 0.9047        | 0.73  | 500  | 0.8906          |
| 0.9017        | 0.87  | 600  | 0.8722          |
| 0.8872        | 1.02  | 700  | 0.8660          |
| 0.8744        | 1.16  | 800  | 0.8501          |
| 0.8221        | 1.31  | 900  | 0.8472          |
| 0.8356        | 1.45  | 1000 | 0.8380          |
| 0.8335        | 1.6   | 1100 | 0.8317          |
| 0.828         | 1.75  | 1200 | 0.8273          |
| 0.8307        | 1.89  | 1300 | 0.8231          |
| 0.8147        | 2.04  | 1400 | 0.8185          |
| 0.8012        | 2.18  | 1500 | 0.8139          |
| 0.7885        | 2.33  | 1600 | 0.8129          |
| 0.7831        | 2.47  | 1700 | 0.8102          |
| 0.7787        | 2.62  | 1800 | 0.8148          |
| 0.7921        | 2.76  | 1900 | 0.8083          |
| 0.7777        | 2.91  | 2000 | 0.8073          |
| 0.7766        | 3.05  | 2100 | 0.8045          |
| 0.7658        | 3.2   | 2200 | 0.8056          |
| 0.7651        | 3.35  | 2300 | 0.8044          |
| 0.7844        | 3.49  | 2400 | 0.8023          |
| 0.7876        | 3.64  | 2500 | 0.8023          |

### Framework versions

- PEFT 0.8.2
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
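
### Reproducing the hyperparameters

A minimal sketch of how the hyperparameters listed above map onto `transformers.TrainingArguments`. The output directory, evaluation cadence, and logging settings are assumptions for illustration; they are not recorded in this card.

```python
from transformers import TrainingArguments

# Mirrors the values listed under "Training hyperparameters".
training_args = TrainingArguments(
    output_dir="phi-2-darebah",        # assumption: the actual output path is not recorded
    learning_rate=2.5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,     # effective train batch size: 2 * 4 = 8
    lr_scheduler_type="linear",
    warmup_steps=5,
    max_steps=2500,
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the optimizer default.
    evaluation_strategy="steps",
    eval_steps=100,                    # matches the 100-step cadence in the results table
    logging_steps=100,                 # assumption
)
```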
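
## How to use

A minimal loading sketch, assuming the adapter weights in this repository are applied on top of `microsoft/phi-2` with PEFT. The repo id `dalyaa/phi-2` below is a placeholder; substitute this repository's actual id.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and its tokenizer.
base_model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype=torch.float32)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

# Attach the fine-tuned adapter. "dalyaa/phi-2" is a placeholder repo id.
model = PeftModel.from_pretrained(base_model, "dalyaa/phi-2")
model.eval()

prompt = "..."  # an Arabic prompt formatted like the dalyaa/darebah6700 examples
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```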