---
license: mit
base_model: microsoft/phi-2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: evol-codealpaca-pairwise-sharegpt-test
  results: []
---

[Built with Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)
See axolotl config

axolotl version: `0.3.0`
```yaml
base_model: microsoft/phi-2
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
trust_remote_code: true
hub_model_id: AlekseyKorshuk/evol-codealpaca-pairwise-sharegpt-test

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: AlekseyKorshuk/evol-codealpaca-pairwise-sharegpt
    type: sharegpt
    conversation: chatml
dataset_prepared_path:
val_set_size: 0.001
output_dir: ./output

sequence_len: 2048
sample_packing: false  # currently unsupported
pad_to_sequence_len:

lora_r:
lora_alpha:
lora_dropout:
lora_target_modules:
lora_target_linear:
lora_fan_in_fan_out:

wandb_project: ui-thesis
wandb_entity:
wandb_watch:
wandb_name: phi-2-chatml-test
wandb_log_model:

gradient_accumulation_steps: 1
micro_batch_size: 16
num_epochs: 3
optimizer: paged_adamw_8bit
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 0.00001
#max_grad_norm: 1.0
lr_scheduler: cosine
learning_rate: 2e-5
warmup_steps: 4
weight_decay: 0.01

train_on_inputs: false
group_by_length: false
bf16: false
fp16: false
tf32: false
float16: true

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

evals_per_epoch: 1
eval_table_size: 8  # Approximate number of predictions sent to wandb depending on batch size. Enabled above 0. Default is 0
eval_table_max_new_tokens: 768  # Total number of tokens generated for predictions sent to wandb. Default is 128
saves_per_epoch: 1
save_total_limit: 1
debug:
deepspeed:
fsdp:
fsdp_config:
resize_token_embeddings_to_32x: true

special_tokens:
  eos_token: "<|im_end|>"
  pad_token: "<|endoftext|>"
tokens:
  - "<|im_start|>"
```
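The config above trains with the ChatML conversation format (`conversation: chatml`), registers `<|im_start|>` as an added token, and uses `<|im_end|>` as the EOS token. As a rough illustration of what that implies for prompting, here is a minimal sketch of the ChatML layout; the helper name and exact whitespace are assumptions for illustration, not part of the training config:

```python
# Minimal sketch of the ChatML layout implied by `conversation: chatml` above.
# The helper name and exact whitespace are assumptions, not taken from the config.
def to_chatml(messages):
    text = ""
    for m in messages:  # each message: {"role": ..., "content": ...}
        text += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    # Leave an assistant turn open so the model generates the reply.
    return text + "<|im_start|>assistant\n"

print(to_chatml([{"role": "user", "content": "Reverse a string in Python."}]))
```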

# evol-codealpaca-pairwise-sharegpt-test

This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the AlekseyKorshuk/evol-codealpaca-pairwise-sharegpt dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0121

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 128 (micro batch size 16 × 8 devices × gradient accumulation 1)
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 4
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0571        | 0.01  | 1    | 1.3648          |
| 0.8044        | 1.0   | 82   | 1.0212          |
| 0.7486        | 2.0   | 164  | 1.0126          |
| 0.7745        | 3.0   | 246  | 1.0121          |

### Framework versions

- Transformers 4.37.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
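
## How to use

The snippet below is a minimal inference sketch, not an official usage guide. It assumes the checkpoint is available under the `hub_model_id` from the config (`AlekseyKorshuk/evol-codealpaca-pairwise-sharegpt-test`), that `trust_remote_code=True` is needed for the phi-2 architecture on this Transformers version, and that prompts follow the ChatML layout shown earlier.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "AlekseyKorshuk/evol-codealpaca-pairwise-sharegpt-test"  # hub_model_id from the config
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
    trust_remote_code=True,
).to(device)

# ChatML prompt, matching the special tokens defined in the training config.
prompt = (
    "<|im_start|>user\n"
    "Write a Python function that checks whether a number is prime.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(device)
output = model.generate(
    **inputs,
    max_new_tokens=256,
    eos_token_id=tokenizer.convert_tokens_to_ids("<|im_end|>"),
)
# Decode only the newly generated tokens, stopping at <|im_end|>.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```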