Create README.md
README.md
ADDED
@@ -0,0 +1,32 @@
Stanford Alpaca

Fine-tuned using the original repository: https://github.com/tatsu-lab/stanford_alpaca
No LoRA has been used; this is a full fine-tune of the base model.
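Since the result is a full fine-tune rather than an adapter, the weights load like any other LLaMA checkpoint. A minimal inference sketch with the `transformers` library, assuming a version with built-in LLaMA support (>= 4.28) and that the checkpoint directory contains both the model and tokenizer files; the path below is just the `--output_dir` from the training command further down and is illustrative:

```python
# Minimal inference sketch, assuming transformers >= 4.28 (built-in LLaMA
# support) and accelerate installed for device_map="auto". The checkpoint
# path is hypothetical: it is the --output_dir from the command below.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

ckpt = "/workspace/output"  # point this at wherever the saved weights live

tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForCausalLM.from_pretrained(
    ckpt, torch_dtype=torch.bfloat16, device_map="auto"
)

# Alpaca is trained on instruction-formatted text, so prompt with the same
# template it saw during fine-tuning (the no-input variant shown here).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nGive three tips for staying healthy.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```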

Training progress can be viewed at: https://wandb.ai/peruano/huggingface/runs/ei57qbzm

Configuration (the repository's default recipe):

```shell
torchrun --nproc_per_node=4 --master_port=3045 train.py \
    --model_name_or_path /workspace/llama-7b-hf \
    --data_path ./alpaca_data.json \
    --bf16 True \
    --output_dir /workspace/output \
    --num_train_epochs 3 \
    --per_device_train_batch_size 4 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 8 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 200 \
    --save_total_limit 1 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --fsdp "full_shard auto_wrap" \
    --fsdp_transformer_layer_cls_to_wrap 'LLaMADecoderLayer' \
    --tf32 True --report_to="wandb"
```
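
For reference, this command reproduces the original Alpaca recipe's effective batch size: 4 GPUs x 4 sequences per device x 8 gradient-accumulation steps = 128 sequences per optimizer step. Note also that `LLaMADecoderLayer` is the class name from the early LLaMA port that the Alpaca repository targeted; current `transformers` releases name it `LlamaDecoderLayer`, so the `--fsdp_transformer_layer_cls_to_wrap` value must match the installed version.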