---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: NousResearch/Llama-2-7b-hf
model-index:
- name: llama2_instruct_generation
  results: []
---

# llama2_instruct_generation

This model is a fine-tuned version of [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6705

## Model description

This is a PEFT adapter for instruction-style text generation, trained on top of NousResearch/Llama-2-7b-hf with TRL's supervised fine-tuning (SFT) trainer. A minimal inference sketch is given at the end of this card.

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed. The dataset is recorded only as `generator`, the placeholder name `datasets` assigns to a dataset built with `Dataset.from_generator` (which TRL's `SFTTrainer` uses internally when packing is enabled); the underlying source corpus is not documented here.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 500

A training sketch consistent with these values is given at the end of this card.

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9724        | 0.0   | 20   | 1.8100          |
| 1.8173        | 0.01  | 40   | 1.7801          |
| 1.8184        | 0.01  | 60   | 1.7671          |
| 1.8725        | 0.01  | 80   | 1.7568          |
| 1.8967        | 0.01  | 100  | 1.7460          |
| 1.8943        | 0.02  | 120  | 1.7172          |
| 1.788         | 0.02  | 140  | 1.7045          |
| 1.8953        | 0.02  | 160  | 1.6986          |
| 1.8262        | 0.02  | 180  | 1.6943          |
| 1.8472        | 0.03  | 200  | 1.6926          |
| 1.8416        | 0.03  | 220  | 1.6896          |
| 1.838         | 0.03  | 240  | 1.6855          |
| 1.7743        | 0.04  | 260  | 1.6806          |
| 1.8562        | 0.04  | 280  | 1.6785          |
| 1.8562        | 0.04  | 300  | 1.6794          |
| 1.8117        | 0.04  | 320  | 1.6783          |
| 1.8193        | 0.05  | 340  | 1.6768          |
| 1.8807        | 0.05  | 360  | 1.6745          |
| 1.7641        | 0.05  | 380  | 1.6738          |
| 1.7738        | 0.05  | 400  | 1.6735          |
| 1.7759        | 0.06  | 420  | 1.6733          |
| 1.7089        | 0.06  | 440  | 1.6721          |
| 1.7984        | 0.06  | 460  | 1.6706          |
| 1.7243        | 0.07  | 480  | 1.6720          |
| 1.9205        | 0.07  | 500  | 1.6705          |

### Framework versions

- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
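
## Training setup (sketch)

The hyperparameters above map directly onto `transformers.TrainingArguments` and TRL's `SFTTrainer`. The sketch below is a reconstruction under stated assumptions, not the original training script: the `TrainingArguments` values mirror the card, the 20-step eval/logging cadence is inferred from the results table, and the dataset, LoRA configuration, sequence length, and packing flag are assumptions not recorded in the card.

```python
# Reconstruction, not the original script. TrainingArguments values mirror
# the card; everything marked "assumption" is not recorded there.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base_id = "NousResearch/Llama-2-7b-hf"
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token  # Llama 2 ships without a pad token

# Hypothetical instruction dataset; the card does not record the source corpus.
train_ds = load_dataset("timdettmers/openassistant-guanaco", split="train")
eval_ds = load_dataset("timdettmers/openassistant-guanaco", split="test")

# LoRA values are assumptions; the card does not record the PEFT config.
peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

# These values come straight from the hyperparameter list above.
training_args = TrainingArguments(
    output_dir="llama2_instruct_generation",
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    max_steps=500,
    evaluation_strategy="steps",
    eval_steps=20,    # inferred from the 20-step cadence of the results table
    logging_steps=20,
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=1024,  # assumption
    tokenizer=tokenizer,
    packing=True,         # assumption; consistent with the "generator" dataset name
)
trainer.train()
```

With `packing=True`, `SFTTrainer` materializes fixed-length chunks via `Dataset.from_generator`, which is why the card lists the dataset simply as `generator`.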
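
## How to use

A minimal inference sketch, assuming the adapter is published under a repo id like `your-username/llama2_instruct_generation` (hypothetical) and loaded with PEFT's `AutoPeftModelForCausalLM`, which reads the base model id from the adapter config. The prompt template is also an assumption; the card does not record one.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "your-username/llama2_instruct_generation"  # hypothetical repo id

# Loads NousResearch/Llama-2-7b-hf (recorded in the adapter config) and
# applies the adapter weights on top of it.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-hf")

# The instruction format below is an assumption, not documented in the card.
prompt = "### Instruction:\nExplain what a LoRA adapter is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,  # avoids the missing-pad-token warning
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```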