---
license: mit
base_model: microsoft/phi-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: phi2-filter2
  results: []
---

[Built with Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.3.0`
```yaml
base_model: microsoft/phi-2
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
trust_remote_code: true

load_in_8bit: false
load_in_4bit: false
strict: false

hub_model_id: satpalsr/phi2-filter2
hf_use_auth_token: true

datasets:
  - path: satpalsr/translation-filter
    type: completion

dataset_prepared_path:
val_set_size: 0.01
output_dir: ./phi-sft-out2

sequence_len: 2048
sample_packing: false  # currently unsupported
pad_to_sequence_len:

adapter:
lora_model_dir:
lora_r: 16
lora_alpha: 32
lora_dropout: 0.1
lora_target_linear: true
lora_fan_in_fan_out:
lora_modules_to_save:
  - embd
  - lm_head

wandb_project: phi2transfilter
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 1
micro_batch_size: 16
num_epochs: 4
optimizer: paged_adamw_8bit
adam_beta2: 0.95
adam_epsilon: 0.00001
max_grad_norm: 1.0
lr_scheduler: cosine
learning_rate: 1e-5

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: true

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: false

warmup_steps: 100
evals_per_epoch: 4
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.1
fsdp:
fsdp_config:
resize_token_embeddings_to_32x: true
special_tokens:
  pad_token: "<|endoftext|>"
```

</details>
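The `datasets` entry in the config above points at `satpalsr/translation-filter` with axolotl's `completion` prompt format (plain causal-LM text). Below is a minimal sketch for inspecting that data with the 🤗 `datasets` library; it assumes the dataset is publicly readable on the Hub and makes no assumption about its column names.

```python
from datasets import load_dataset

# Dataset referenced in the axolotl config above.
# Assumption: satpalsr/translation-filter is readable from the Hugging Face Hub.
ds = load_dataset("satpalsr/translation-filter", split="train")

print(ds)     # number of rows and column names
print(ds[0])  # one sample row; the "completion" type trains on raw text
```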

# phi2-filter2

This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the [satpalsr/translation-filter](https://huggingface.co/datasets/satpalsr/translation-filter) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1944

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5676        | 0.01  | 1    | 2.5391          |
| 2.4364        | 0.25  | 29   | 2.4042          |
| 1.9523        | 0.5   | 58   | 1.8580          |
| 1.1137        | 0.75  | 87   | 0.9535          |
| 0.5107        | 1.0   | 116  | 0.4195          |
| 0.4588        | 1.25  | 145  | 0.2877          |
| 0.2876        | 1.5   | 174  | 0.2462          |
| 0.2959        | 1.75  | 203  | 0.2264          |
| 0.2197        | 2.0   | 232  | 0.2114          |
| 0.3045        | 2.25  | 261  | 0.2052          |
| 0.2726        | 2.5   | 290  | 0.2022          |
| 0.3046        | 2.75  | 319  | 0.1975          |
| 0.3316        | 3.0   | 348  | 0.1954          |
| 0.2223        | 3.25  | 377  | 0.1950          |
| 0.2609        | 3.5   | 406  | 0.1946          |
| 0.2739        | 3.75  | 435  | 0.1945          |
| 0.2703        | 4.0   | 464  | 0.1944          |

### Framework versions

- Transformers 4.37.0.dev0
- Pytorch 2.1.1+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
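### Usage

The card does not include a usage snippet, so the following is a minimal inference sketch with 🤗 Transformers under the settings from the config above (bf16, `trust_remote_code`). The prompt is a placeholder, not the training format: the model was trained on completion-style text, so the expected input layout depends on the translation-filter data.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "satpalsr/phi2-filter2"

# trust_remote_code and bf16 match the training config above.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",  # requires `accelerate`; replace with .to(device) if preferred
)

# Placeholder prompt for illustration only.
prompt = "Example input text"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```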