---
language:
- zh
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- nycu-112-2-deeplearning-hw2
- generated_from_trainer
base_model: MediaTek-Research/Breeze-7B-Instruct-v1_0
datasets:
- DandinPower/ZH-Reading-Comprehension-Breeze-Instruct
model-index:
- name: breeze_7b_lora
  results: []
---

# breeze_7b_lora

This model is a fine-tuned version of [MediaTek-Research/Breeze-7B-Instruct-v1_0](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0) on the DandinPower/ZH-Reading-Comprehension-Breeze-Instruct dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9671

## Model description

More information needed

## Intended uses & limitations

More information needed. A hedged sketch of loading this adapter for inference appears at the end of this card.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of how they map onto `transformers.TrainingArguments` also appears at the end of this card):
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 700
- num_epochs: 10.0

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.2919        | 0.3690 | 250  | 2.2932          |
| 2.2105        | 0.7380 | 500  | 2.1866          |
| 1.9287        | 1.1070 | 750  | 1.9796          |
| 1.8181        | 1.4760 | 1000 | 1.8416          |
| 1.6765        | 1.8450 | 1250 | 1.7156          |
| 1.4271        | 2.2140 | 1500 | 1.6054          |
| 1.3595        | 2.5830 | 1750 | 1.5071          |
| 1.2794        | 2.9520 | 2000 | 1.4263          |
| 1.0636        | 3.3210 | 2250 | 1.3707          |
| 1.0272        | 3.6900 | 2500 | 1.3044          |
| 0.8977        | 4.0590 | 2750 | 1.2597          |
| 0.8923        | 4.4280 | 3000 | 1.2184          |
| 0.8628        | 4.7970 | 3250 | 1.1737          |
| 0.6994        | 5.1661 | 3500 | 1.1514          |
| 0.7201        | 5.5351 | 3750 | 1.1209          |
| 0.7237        | 5.9041 | 4000 | 1.0931          |
| 0.6468        | 6.2731 | 4250 | 1.0740          |
| 0.6052        | 6.6421 | 4500 | 1.0472          |
| 0.5737        | 7.0111 | 4750 | 1.0360          |
| 0.5419        | 7.3801 | 5000 | 1.0246          |
| 0.5539        | 7.7491 | 5250 | 1.0027          |
| 0.4615        | 8.1181 | 5500 | 0.9947          |
| 0.4782        | 8.4871 | 5750 | 0.9851          |
| 0.4809        | 8.8561 | 6000 | 0.9699          |
| 0.4284        | 9.2251 | 6250 | 0.9738          |
| 0.4332        | 9.5941 | 6500 | 0.9696          |
| 0.4341        | 9.9631 | 6750 | 0.9671          |

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
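## Loading the adapter (sketch)

Since this card does not yet document usage, the following is a minimal inference sketch rather than an official example. It assumes the LoRA adapter loads onto the base model through PEFT's standard API; the `ADAPTER` id below is a placeholder for wherever these adapter weights are actually hosted, and the bf16 dtype and Chinese prompt are illustrative choices only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "MediaTek-Research/Breeze-7B-Instruct-v1_0"
ADAPTER = "breeze_7b_lora"  # assumption: replace with this adapter's actual repo id or local path

tokenizer = AutoTokenizer.from_pretrained(BASE)
base_model = AutoModelForCausalLM.from_pretrained(
    BASE,
    torch_dtype=torch.bfloat16,  # assumption: bf16 inference; use float16 on GPUs without bf16
    device_map="auto",
)
# Attach the LoRA adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base_model, ADAPTER)
model.eval()

# Breeze-7B-Instruct ships a chat template, so build prompts through the tokenizer.
messages = [{"role": "user", "content": "請根據文章回答問題。"}]  # placeholder zh prompt: "Answer the question from the passage."
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

If latency matters, `model.merge_and_unload()` folds the LoRA weights into the base model so generation runs without the PEFT wrapper.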
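## Mapping the hyperparameters to `TrainingArguments` (sketch)

For anyone reproducing this run with the Hugging Face `Trainer`, the hyperparameter list above maps onto `TrainingArguments` as sketched below. The per-device batch sizes, accumulation steps, scheduler, warmup, and seed come straight from that list; the 250-step evaluation cadence is inferred from the results table; the LoRA configuration (rank, alpha, target modules) is not recorded on this card and is therefore omitted.

```python
from transformers import TrainingArguments

# Launch with two processes (e.g. `torchrun --nproc_per_node 2 train.py`) so that
# 2 devices x 1 per-device batch x 8 accumulation steps = total train batch size 16.
args = TrainingArguments(
    output_dir="breeze_7b_lora",
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=10.0,
    lr_scheduler_type="linear",
    warmup_steps=700,
    seed=42,
    adam_beta1=0.9,      # the Adam settings are the Trainer defaults,
    adam_beta2=0.999,    # spelled out here only to match the list above
    adam_epsilon=1e-8,
    evaluation_strategy="steps",  # assumption: the table above evaluates every 250 steps
    eval_steps=250,
    logging_steps=250,
)
```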