---
license: mit
---

Alpaca LoRA adapter weights fine-tuned on the following instruction dataset:

https://huggingface.co/datasets/rewoo/planner_instruction_tuning_2k/blob/main/README.md

We used the following fine-tuning parameters:

```
python finetune.py \
    --base_model 'decapoda-research/llama-7b-hf' \
    --data_path 'rewoo/planner_instruction_tuning_2k' \
    --output_dir './lora-alpaca-planner' \
    --batch_size 128 \
    --micro_batch_size 8 \
    --num_epochs 10 \
    --learning_rate 1e-4 \
    --cutoff_len 1024 \
    --val_set_size 200 \
    --lora_r 8 \
    --lora_alpha 16 \
    --lora_dropout 0.05 \
    --lora_target_modules '[q_proj,v_proj]' \
    --train_on_inputs \
    --group_by_length \
    --resume_from_checkpoint 'tloen/alpaca-lora-7b'
```
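
Below is a minimal loading sketch (not part of the original card), assuming the `transformers` and `peft` libraries; `<this-adapter-repo-id>` is a placeholder for this repository's Hub id, which the card does not state:

```
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

# Base model the adapter was trained against (same as --base_model above).
base = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")

# Attach the LoRA adapter weights on top of the frozen base weights.
# "<this-adapter-repo-id>" is a placeholder: replace it with this
# adapter's actual Hugging Face repo id.
model = PeftModel.from_pretrained(base, "<this-adapter-repo-id>")
model.eval()
```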