---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.3
model-index:
- name: outputs/axolotl-qlora-out-line
  results: []
---

[Built with Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`
```yaml
base_model: mistralai/Mistral-7B-v0.3
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
load_in_8bit: true
load_in_4bit: false
strict: false

datasets:
  - path: vdaita/editpackft_inst_line
    type: oasst
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./outputs/axolotl-qlora-out-line

adapter: lora
lora_model_dir:

sequence_len: 2048
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true

lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project: huggingface
wandb_log_model: axolotl-qlora-line-mistral

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 4
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
logging_steps: 1
flash_attention: true

warmup_steps: 10
evals_per_epoch: 4
saves_per_epoch: 1
```

</details><br>
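For readers who want to map the LoRA settings above onto `peft` directly, the following is a rough, hypothetical equivalent rather than the config Axolotl builds internally. In particular, with `lora_target_linear: true` Axolotl targets every linear projection layer, so the explicit `target_modules` list below is an assumption for Mistral-style architectures, not something taken verbatim from the config.

```python
from peft import LoraConfig

# Rough peft equivalent of the LoRA hyperparameters in the config above.
# target_modules is an assumption: lora_target_linear: true makes Axolotl
# adapt all linear projection layers, which for Mistral-style models
# corresponds to the modules listed here.
lora_config = LoraConfig(
    r=32,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)
```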

# outputs/axolotl-qlora-out-line

This model is a fine-tuned version of [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3) on the [vdaita/editpackft_inst_line](https://huggingface.co/datasets/vdaita/editpackft_inst_line) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2883

## Model description

This is a LoRA adapter (r=32, alpha=16, dropout 0.05, applied to all linear layers) trained on top of Mistral-7B-v0.3 loaded in 8-bit, using the Axolotl configuration shown above.

## Intended uses & limitations

More information needed

## Training and evaluation data

The adapter was trained on vdaita/editpackft_inst_line, with 5% of the data held out as the evaluation set (`val_set_size: 0.05`).

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.847         | 0.01  | 1    | 0.9975          |
| 0.3533        | 0.26  | 20   | 0.2772          |
| 0.299         | 0.52  | 40   | 0.2501          |
| 0.2288        | 0.77  | 60   | 0.2439          |
| 0.334         | 1.01  | 80   | 0.2394          |
| 0.3017        | 1.27  | 100  | 0.2399          |
| 0.2394        | 1.53  | 120  | 0.2416          |
| 0.2261        | 1.78  | 140  | 0.2400          |
| 0.177         | 2.02  | 160  | 0.2388          |
| 0.1911        | 2.28  | 180  | 0.2557          |
| 0.1884        | 2.54  | 200  | 0.2601          |
| 0.1516        | 2.79  | 220  | 0.2627          |
| 0.1545        | 3.03  | 240  | 0.2628          |
| 0.092         | 3.29  | 260  | 0.2915          |
| 0.1251        | 3.55  | 280  | 0.2892          |
| 0.109         | 3.8   | 300  | 0.2883          |

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
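## Example usage

As a minimal sketch (not part of the original training pipeline), the adapter can be loaded onto the base model with `peft` roughly as follows. The adapter path assumes the local `output_dir` from the config above; substitute a Hub repo id if the adapter is published there, and format prompts to match the training data.

```python
# Minimal inference sketch: load the base model and apply this LoRA adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "mistralai/Mistral-7B-v0.3"
adapter_path = "./outputs/axolotl-qlora-out-line"  # output_dir from the config above

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_path)
model.eval()

prompt = "Your prompt here"  # placeholder; use the same prompt format as the training data
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```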