---
license: apache-2.0
datasets:
- habanoz/airoboros-3.1-no-mathjson-max-1k
language:
- en
library_name: transformers
pipeline_tag: text-generation
base_model: microsoft/phi-1_5
---
Phi-1.5 fine-tuned on habanoz/airoboros-3.1-no-mathjson-max-1k (a subset of airoboros-3.1) using QLoRA.
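
A minimal generation sketch with transformers. The model id below is a placeholder for this repository's Hugging Face id, and `trust_remote_code=True` mirrors the flag used during training for the Phi-1.5 architecture.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: replace with this repository's Hugging Face model id.
model_id = "habanoz/<this-model-id>"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "Explain what QLoRA is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```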
**train metrics**
- epoch = 3.0
- train_loss = 1.1384
- train_runtime = 5:25:54.30
- train_samples_per_second = 3.065
- train_steps_per_second = 0.191
**eval metrics**
- epoch = 3.0
- eval_loss = 0.8639
- eval_runtime = 0:00:26.59
- eval_samples_per_second = 7.596
- eval_steps_per_second = 1.918
SFT code: https://github.com/habanoz/qlora.git
Training command:
```bash
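# BASE_DIR, BASE_MODEL and OUTPUT_NAME are shell variables set before launching;
# presumably BASE_MODEL=microsoft/phi-1_5 (the base model listed in the card metadata).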
accelerate launch $BASE_DIR/qlora/train.py \
--model_name_or_path $BASE_MODEL \
--working_dir $BASE_DIR/$OUTPUT_NAME-checkpoints \
--output_dir $BASE_DIR/$OUTPUT_NAME-peft \
--merged_output_dir $BASE_DIR/$OUTPUT_NAME \
--final_output_dir $BASE_DIR/$OUTPUT_NAME-final \
--num_train_epochs 3 \
--logging_steps 1 \
--save_strategy steps \
--save_steps 120 \
--save_total_limit 2 \
--data_seed 11422 \
--evaluation_strategy steps \
--per_device_eval_batch_size 4 \
--eval_dataset_size 0.01 \
--eval_steps 120 \
--max_new_tokens 1024 \
--dataloader_num_workers 3 \
--logging_strategy steps \
--do_train \
--do_eval \
--lora_r 64 \
--lora_alpha 16 \
--lora_modules all \
--bits 4 \
--double_quant \
--quant_type nf4 \
--lr_scheduler_type constant \
--dataset habanoz/airoboros-3.1-no-mathjson-max-1k \
--dataset_format airoboros_chat \
--model_max_len 1024 \
--per_device_train_batch_size 1 \
--gradient_accumulation_steps 16 \
--learning_rate 1e-5 \
--adam_beta2 0.999 \
--max_grad_norm 0.3 \
--lora_dropout 0.0 \
--weight_decay 0.0 \
--seed 11422 \
--gradient_checkpointing False \
--use_flash_attention_2 \
--ddp_find_unused_parameters False \
--trust_remote_code True
```
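
For reference, the quantization and LoRA flags above roughly correspond to the following transformers/peft configuration. This is a sketch, not the training script itself; `target_modules="all-linear"` is an assumption for the script's `--lora_modules all` option.

```python
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization with double quantization,
# matching --bits 4, --quant_type nf4 and --double_quant.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
)

# LoRA hyperparameters matching --lora_r 64, --lora_alpha 16 and --lora_dropout 0.0.
# target_modules="all-linear" is an assumed equivalent of --lora_modules all.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.0,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules="all-linear",
)
```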