---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistral-7b-autextification2024
  results: []
---

# mistral-7b-autextification2024

This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6422

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 500

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4251        | 0.0   | 10   | 1.7924          |
| 1.3175        | 0.01  | 20   | 1.7542          |
| 1.7841        | 0.01  | 30   | 1.7322          |
| 2.0421        | 0.01  | 40   | 1.7294          |
| 2.669         | 0.02  | 50   | 1.7471          |
| 1.314         | 0.02  | 60   | 1.7153          |
| 1.4678        | 0.02  | 70   | 1.6989          |
| 1.7679        | 0.03  | 80   | 1.6928          |
| 2.0057        | 0.03  | 90   | 1.7002          |
| 2.5086        | 0.03  | 100  | 1.7053          |
| 1.3326        | 0.04  | 110  | 1.6931          |
| 1.3984        | 0.04  | 120  | 1.6823          |
| 1.8045        | 0.04  | 130  | 1.6807          |
| 1.8764        | 0.05  | 140  | 1.6812          |
| 2.5524        | 0.05  | 150  | 1.6825          |
| 1.2854        | 0.05  | 160  | 1.6766          |
| 1.3712        | 0.06  | 170  | 1.6709          |
| 1.8211        | 0.06  | 180  | 1.6660          |
| 2.0365        | 0.06  | 190  | 1.6778          |
| 2.4664        | 0.07  | 200  | 1.6938          |
| 1.3405        | 0.07  | 210  | 1.6712          |
| 1.3856        | 0.07  | 220  | 1.6666          |
| 1.5553        | 0.08  | 230  | 1.6586          |
| 1.8616        | 0.08  | 240  | 1.6613          |
| 2.4064        | 0.09  | 250  | 1.6666          |
| 1.3446        | 0.09  | 260  | 1.6681          |
| 1.386         | 0.09  | 270  | 1.6645          |
| 1.6508        | 0.1   | 280  | 1.6582          |
| 1.8588        | 0.1   | 290  | 1.6600          |
| 2.3148        | 0.1   | 300  | 1.6524          |
| 1.2785        | 0.11  | 310  | 1.6549          |
| 1.2727        | 0.11  | 320  | 1.6517          |
| 1.5971        | 0.11  | 330  | 1.6486          |
| 1.7811        | 0.12  | 340  | 1.6540          |
| 2.3368        | 0.12  | 350  | 1.6596          |
| 1.2513        | 0.12  | 360  | 1.6578          |
| 1.4403        | 0.13  | 370  | 1.6429          |
| 1.8051        | 0.13  | 380  | 1.6462          |
| 1.8214        | 0.13  | 390  | 1.6469          |
| 2.4691        | 0.14  | 400  | 1.6654          |
| 1.2895        | 0.14  | 410  | 1.6543          |
| 1.3192        | 0.14  | 420  | 1.6435          |
| 1.7031        | 0.15  | 430  | 1.6438          |
| 1.8647        | 0.15  | 440  | 1.6402          |
| 2.398         | 0.15  | 450  | 1.6444          |
| 1.3195        | 0.16  | 460  | 1.6445          |
| 1.4008        | 0.16  | 470  | 1.6407          |
| 1.6925        | 0.16  | 480  | 1.6380          |
| 1.8432        | 0.17  | 490  | 1.6396          |
| 2.5103        | 0.17  | 500  | 1.6422          |

### Framework versions

- PEFT 0.10.0
- Transformers 4.39.1
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
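
## How to load the adapter (illustrative)

This repository contains a PEFT adapter rather than full model weights, so it is loaded on top of the base model. The sketch below is an assumption based on the card metadata (`library_name: peft`, base model `mistralai/Mistral-7B-v0.1`), not the authors' own code; the dtype and device settings are placeholders.

```python
# Illustrative sketch only. The adapter repo id and dtype/device choices
# are assumptions; replace "<user>/mistral-7b-autextification2024" with
# the full Hub repo id of this adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    torch_dtype=torch.bfloat16,  # assumed precision
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

# Attach the fine-tuned LoRA adapter from this repository.
model = PeftModel.from_pretrained(base, "<user>/mistral-7b-autextification2024")
model.eval()

prompt = "Example input text"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```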
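
## Reproducing the training setup (sketch)

A minimal sketch of how the hyperparameters above map onto TRL's `SFTTrainer`, assuming a TRL release contemporary to the framework versions listed (~0.8). The dataset, the LoRA configuration (rank, alpha, target modules), and the sequence length are not reported in this card, so those values below are placeholders, not the actual training configuration.

```python
# Sketch under stated assumptions: dataset, LoRA config, and max_seq_length
# are placeholders; only the TrainingArguments values come from this card.
from datasets import load_dataset
from transformers import TrainingArguments
from peft import LoraConfig
from trl import SFTTrainer

dataset = load_dataset("json", data_files="train.jsonl", split="train")  # placeholder

peft_config = LoraConfig(  # assumed values; not reported in the card
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

args = TrainingArguments(
    output_dir="mistral-7b-autextification2024",
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # 4 x 4 = total train batch size of 16
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    max_steps=500,
    seed=42,
    logging_steps=10,  # the results table reports losses every 10 steps
)

trainer = SFTTrainer(
    model="mistralai/Mistral-7B-v0.1",
    args=args,
    train_dataset=dataset,
    dataset_text_field="text",  # assumed column name
    peft_config=peft_config,
    max_seq_length=1024,        # assumption; not stated in the card
)
trainer.train()
```

Note that `gradient_accumulation_steps=4` with `per_device_train_batch_size=4` yields the total train batch size of 16 reported above.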