---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: microsoft/phi-1_5
model-index:
- name: phi-sft-out
  results: []
---

[Built with Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)
See axolotl config

axolotl version: `0.4.0`

```yaml
base_model: microsoft/phi-1_5
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: true
strict: false

datasets:
  - path: garage-bAInd/Open-Platypus
    type: alpaca
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./phi-sft-out

sequence_len: 2048
sample_packing: true
pad_to_sequence_len: true

adapter: qlora
lora_model_dir:
lora_r: 64
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 1
micro_batch_size: 2
num_epochs: 4
optimizer: adamw_torch
adam_beta2: 0.95
adam_epsilon: 0.00001
max_grad_norm: 1.0
lr_scheduler: cosine
learning_rate: 0.000003

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: True
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 100
evals_per_epoch: 4
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.1
fsdp:
fsdp_config:
resize_token_embeddings_to_32x: true
special_tokens:
  pad_token: "<|endoftext|>"
```
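
The config above is consumed by Axolotl, but as a rough orientation, the following is a minimal sketch of what its QLoRA settings approximately correspond to in plain `transformers`/`peft` terms. The NF4 quantization type and bfloat16 compute dtype are assumptions (the config only sets `load_in_4bit: true`), and mapping `lora_target_linear: true` to `target_modules="all-linear"` is likewise an approximation, not the Axolotl internals.

```python
# Sketch only: approximate transformers/peft equivalent of the QLoRA settings above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load_in_4bit: true
    bnb_4bit_quant_type="nf4",              # assumption: not set in the config
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption: bf16: auto
)

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-1_5",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")

# Casts norms/embeddings and enables gradient checkpointing for k-bit training.
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=64,                         # lora_r
    lora_alpha=32,                # lora_alpha
    lora_dropout=0.05,            # lora_dropout
    target_modules="all-linear",  # approximation of lora_target_linear: true (PEFT >= 0.8)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```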

# phi-sft-out

This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2548

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0668        | 0.0   | 1    | 1.2826          |
| 0.9408        | 0.25  | 580  | 1.2613          |
| 1.2121        | 0.5   | 1160 | 1.2559          |
| 0.9644        | 0.75  | 1740 | 1.2562          |
| 0.9582        | 1.0   | 2320 | 1.2556          |
| 1.0009        | 1.23  | 2900 | 1.2559          |
| 0.7816        | 1.48  | 3480 | 1.2556          |
| 0.9843        | 1.73  | 4060 | 1.2552          |
| 0.8877        | 1.98  | 4640 | 1.2559          |
| 0.8498        | 2.21  | 5220 | 1.2554          |
| 0.9163        | 2.46  | 5800 | 1.2550          |
| 1.0539        | 2.71  | 6380 | 1.2545          |
| 0.9533        | 2.96  | 6960 | 1.2547          |
| 0.6969        | 3.19  | 7540 | 1.2547          |
| 0.6204        | 3.44  | 8120 | 1.2547          |
| 0.891         | 3.69  | 8700 | 1.2548          |

### Framework versions

- PEFT 0.8.2
- Transformers 4.38.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.17.0
- Tokenizers 0.15.0
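
### Inference example

A minimal inference sketch, assuming the LoRA adapter was saved to `./phi-sft-out` (the `output_dir` above); substitute a Hub repo id for that path if the adapter was pushed. The Alpaca-style prompt mirrors the `type: alpaca` dataset format and is illustrative only.

```python
# Sketch: load the base model and apply the fine-tuned LoRA adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-1_5",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")

# "./phi-sft-out" is the local training output_dir; adjust if loading from the Hub.
model = PeftModel.from_pretrained(base, "./phi-sft-out")
model.eval()

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a LoRA adapter is.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```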