---
license: mit
---

Alpaca-LoRA adapter weights fine-tuned on the following instruction dataset:

https://huggingface.co/datasets/rewoo/planner_instruction_tuning_2k/blob/main/README.md

The training script is borrowed from the official [Alpaca-LoRA](https://github.com/tloen/alpaca-lora) implementation.

We use the following parameters:

```bash
python finetune.py \
    --base_model 'decapoda-research/llama-7b-hf' \
    --data_path 'rewoo/planner_instruction_tuning_2k' \
    --output_dir './lora-alpaca-planner' \
    --batch_size 128 \
    --micro_batch_size 8 \
    --num_epochs 10 \
    --learning_rate 1e-4 \
    --cutoff_len 1024 \
    --val_set_size 200 \
    --lora_r 8 \
    --lora_alpha 16 \
    --lora_dropout 0.05 \
    --lora_target_modules '[q_proj,v_proj]' \
    --train_on_inputs \
    --group_by_length \
    --resume_from_checkpoint 'tloen/alpaca-lora-7b'
```
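
With these settings, the upstream `finetune.py` derives its gradient accumulation as `batch_size // micro_batch_size`, i.e. 16 accumulation steps per optimizer update.

To run inference with the resulting adapter, the standard `transformers` + `peft` workflow should apply. The sketch below is a minimal example under that assumption; the adapter repo id `your-org/lora-alpaca-planner` and the prompt text are placeholders, not part of this card.

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

# Load the frozen base model the adapter was trained against.
base_model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")

# Apply the LoRA adapter weights on top of the base model.
# "your-org/lora-alpaca-planner" is a placeholder repo id.
model = PeftModel.from_pretrained(base_model, "your-org/lora-alpaca-planner")
model.eval()

# Alpaca-style prompt format, as used by the Alpaca-LoRA training script.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nMake a plan to find the capital of France.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```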