---
license: apache-2.0
pipeline_tag: text2text-generation
tags:
- llama
- llm
---

This is a LoRA checkpoint fine-tuned with the CLI command below. The fine-tuning run is logged in this [W&B dashboard](https://wandb.ai/chansung18/alpaca_lora/runs/iguhmy31?workspace=user-chansung18). I used a DGX workstation with 8 x A100 (40GB) GPUs.

```console
python finetune.py \
    --base_model='elinas/llama-13b-hf-transformers-4.29' \
    --data_path='alpaca_data.json' \
    --num_epochs=10 \
    --cutoff_len=1024 \
    --group_by_length \
    --output_dir='./lora-alpaca-13b-elinas' \
    --lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' \
    --lora_r=16 \
    --lora_alpha=32 \
    --batch_size=1024 \
    --micro_batch_size=28
```
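
The gap between `--batch_size` and `--micro_batch_size` implies gradient accumulation. A minimal sketch of the arithmetic, assuming the script follows the usual alpaca-lora convention of deriving the accumulation steps from these two flags (check `finetune.py` to confirm):

```python
# Assumption: finetune.py derives gradient accumulation from the two batch
# flags, as in the common alpaca-lora recipe; verify against the actual script.
batch_size = 1024      # examples consumed per optimizer update
micro_batch_size = 28  # examples per forward/backward pass on each device
gradient_accumulation_steps = batch_size // micro_batch_size
print(gradient_accumulation_steps)  # 36 micro-batches accumulated per update
# With multi-GPU data parallelism, scripts of this style typically divide the
# accumulation steps further by the number of GPUs (here, 8).
```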

This LoRA checkpoint should be used with `transformers >= 4.29`. As of 4/30/2023, that version is only available from source and can be installed with:
```console
pip install git+https://github.com/huggingface/transformers.git
```
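
For completeness, here is a minimal inference sketch that attaches the adapter to the base model with `peft`. It assumes `peft` and `accelerate` are installed; the adapter id is a placeholder (use this repository's id), and the prompt follows the standard Alpaca template, which this checkpoint presumably expects given it was trained on `alpaca_data.json`.

```python
# Minimal inference sketch (assumptions: `peft` and `accelerate` installed,
# enough GPU memory for a 13B model in fp16; the adapter id is a placeholder).
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

base_model_id = "elinas/llama-13b-hf-transformers-4.29"
adapter_id = "path/to/this-lora-checkpoint"  # placeholder: this repo's id

tokenizer = LlamaTokenizer.from_pretrained(base_model_id)
model = LlamaForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # let accelerate place the weights automatically
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights
model.eval()

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nName three primary colors.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```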