---
license: apache-2.0
datasets:
- habanoz/airoboros-3.1-no-mathjson-max-1k
language:
- en
library_name: transformers
pipeline_tag: text-generation
base_model: microsoft/phi-1_5
---

Phi-1.5 fine-tuned on habanoz/airoboros-3.1-no-mathjson-max-1k (a subset of airoboros-3.1) using QLoRA.
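A minimal inference sketch with `transformers` is shown below. The repo id `MODEL_ID` is a hypothetical placeholder (replace it with this repository's actual id), and the plain-string prompt is an assumption; the exact prompt template depends on the `airoboros_chat` format used during SFT.

```python
# Minimal inference sketch. MODEL_ID is a hypothetical placeholder;
# replace it with the actual repository id of the merged weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "habanoz/phi-1_5-airoboros"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,  # mirrors the --trust_remote_code training flag
)

prompt = "What is the capital of France?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```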

**train metrics**
- epoch                    =        3.0
- train_loss               =     1.1384
- train_runtime            = 5:25:54.30
- train_samples_per_second =      3.065
- train_steps_per_second   =      0.191

**eval metrics**
- epoch                   =        3.0
- eval_loss               =     0.8639
- eval_runtime            = 0:00:26.59
- eval_samples_per_second =      7.596
- eval_steps_per_second   =      1.918


SFT code: https://github.com/habanoz/qlora.git

Training command:
```bash
accelerate launch $BASE_DIR/qlora/train.py \
  --model_name_or_path $BASE_MODEL \
  --working_dir $BASE_DIR/$OUTPUT_NAME-checkpoints \
  --output_dir $BASE_DIR/$OUTPUT_NAME-peft \
  --merged_output_dir $BASE_DIR/$OUTPUT_NAME \
  --final_output_dir $BASE_DIR/$OUTPUT_NAME-final \
  --num_train_epochs 3 \
  --logging_steps 1 \
  --save_strategy steps \
  --save_steps 120 \
  --save_total_limit 2 \
  --data_seed 11422 \
  --evaluation_strategy steps \
  --per_device_eval_batch_size 4 \
  --eval_dataset_size 0.01 \
  --eval_steps 120 \
  --max_new_tokens 1024 \
  --dataloader_num_workers 3 \
  --logging_strategy steps \
  --do_train \
  --do_eval \
  --lora_r 64 \
  --lora_alpha 16 \
  --lora_modules all \
  --bits 4 \
  --double_quant \
  --quant_type nf4 \
  --lr_scheduler_type constant \
  --dataset habanoz/airoboros-3.1-no-mathjson-max-1k \
  --dataset_format airoboros_chat \
  --model_max_len 1024 \
  --per_device_train_batch_size 1 \
  --gradient_accumulation_steps 16 \
  --learning_rate 1e-5 \
  --adam_beta2 0.999 \
  --max_grad_norm 0.3 \
  --lora_dropout 0.0 \
  --weight_decay 0.0 \
  --seed 11422 \
  --gradient_checkpointing False \
  --use_flash_attention_2 \
  --ddp_find_unused_parameters False \
  --trust_remote_code True
```
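For reference, the quantization and LoRA flags above map roughly onto the following `transformers`/`peft` configuration. This is a sketch under the assumption that the training script forwards these flags to `BitsAndBytesConfig` and `LoraConfig`; the compute dtype and the target-module choice for `--lora_modules all` are assumptions, not values taken from the script.

```python
# Sketch of the quantization/LoRA setup implied by the flags above.
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # --bits 4
    bnb_4bit_quant_type="nf4",              # --quant_type nf4
    bnb_4bit_use_double_quant=True,         # --double_quant
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption: not set by a flag
)

lora_config = LoraConfig(
    r=64,              # --lora_r 64
    lora_alpha=16,     # --lora_alpha 16
    lora_dropout=0.0,  # --lora_dropout 0.0
    bias="none",
    task_type="CAUSAL_LM",
    # Assumption for --lora_modules all: adapt every linear layer.
    target_modules="all-linear",
)
```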