---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistral7b_instruct_generation
  results: []
---

# mistral7b_instruct_generation

This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7939

## Model description

This repository contains a PEFT adapter trained on top of Mistral-7B-v0.1 with TRL's supervised fine-tuning (SFT) trainer. The base model weights are left untouched, so the adapter must be loaded together with the base model at inference time (see the usage sketch at the end of this card).

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the framework versions below):
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 500

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7815        | 0.0   | 20   | 1.8296          |
| 1.8199        | 0.01  | 40   | 1.7992          |
| 1.7606        | 0.01  | 60   | 1.7950          |
| 1.8347        | 0.01  | 80   | 1.7923          |
| 1.7433        | 0.01  | 100  | 1.7958          |
| 1.8829        | 0.02  | 120  | 1.7928          |
| 1.769         | 0.02  | 140  | 1.7893          |
| 1.7817        | 0.02  | 160  | 1.7849          |
| 1.7975        | 0.03  | 180  | 1.7881          |
| 2.008         | 0.03  | 200  | 1.7882          |
| 1.827         | 0.03  | 220  | 1.7993          |
| 1.8336        | 0.03  | 240  | 1.7953          |
| 1.8757        | 0.04  | 260  | 1.7916          |
| 1.9317        | 0.04  | 280  | 1.7900          |
| 1.8708        | 0.04  | 300  | 1.7867          |
| 1.8851        | 0.04  | 320  | 1.7928          |
| 1.94          | 0.05  | 340  | 1.7880          |
| 1.7749        | 0.05  | 360  | 1.8033          |
| 1.8647        | 0.05  | 380  | 1.7870          |
| 1.8468        | 0.06  | 400  | 1.7871          |
| 1.8341        | 0.06  | 420  | 1.7890          |
| 1.9152        | 0.06  | 440  | 1.7892          |
| 1.7979        | 0.06  | 460  | 1.8051          |
| 1.9065        | 0.07  | 480  | 1.7986          |
| 1.8011        | 0.07  | 500  | 1.7939          |

### Framework versions

- PEFT 0.7.1
- Transformers 4.37.0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
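
### Example training configuration

The card does not include the training script, so the following is a minimal sketch of how the hyperparameters above map onto `TrainingArguments` and TRL's `SFTTrainer`. The LoRA settings, dataset loading, text column name, sequence length, and evaluation cadence (every 20 steps, inferred from the results table) are assumptions, not recorded configuration.

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

# Placeholder data loading; how the "generator" dataset was built is not
# documented in this card.
dataset = load_dataset("json", data_files="train.jsonl", split="train")
dataset = dataset.train_test_split(test_size=0.1, seed=42)

# Assumed LoRA settings; the actual adapter configuration is not recorded here.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# These fields mirror the hyperparameters listed above; eval_steps=20 is
# inferred from the cadence of the training-results table.
args = TrainingArguments(
    output_dir="mistral7b_instruct_generation",
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    max_steps=500,
    evaluation_strategy="steps",
    eval_steps=20,
    logging_steps=20,
)

trainer = SFTTrainer(
    model="mistralai/Mistral-7B-v0.1",  # TRL loads the base model and wraps it with the adapter
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    peft_config=peft_config,
    dataset_text_field="text",  # assumed column name
    max_seq_length=1024,        # assumed; not recorded in the card
)
trainer.train()
```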
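
## How to use

Because this repository contains a PEFT adapter rather than full model weights, inference requires loading the base model first and attaching the adapter on top. The snippet below is a minimal sketch: the adapter id and the prompt format are placeholders, since neither is documented in this card.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "mistralai/Mistral-7B-v0.1"
# Placeholder: replace with the adapter's repo id or a local checkpoint path.
adapter_id = "mistral7b_instruct_generation"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
# fp16 + device_map="auto" keeps the 7B model on GPU; requires `accelerate`.
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Attach the fine-tuned PEFT weights on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Placeholder prompt; the instruction format used in training is not documented.
prompt = "### Instruction:\nExplain what parameter-efficient fine-tuning is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```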