---
license: mit
datasets:
- yahma/alpaca-cleaned
duplicated_from: tloen/alpaca-lora-7b
---

This repo contains a low-rank adapter (LoRA) for LLaMA-7B,
fine-tuned on the [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) dataset.

This version of the weights was trained with the following hyperparameters:

- Epochs: 10 (weights loaded from the best epoch)
- Batch size: 128
- Cutoff length: 512
- Learning rate: 3e-4
- LoRA _r_: 16
- LoRA target modules: q_proj, k_proj, v_proj, o_proj

That is:

```bash
python finetune.py \
    --base_model='decapoda-research/llama-7b-hf' \
    --num_epochs=10 \
    --cutoff_len=512 \
    --group_by_length \
    --output_dir='./lora-alpaca-512-qkvo' \
    --lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' \
    --lora_r=16 \
    --micro_batch_size=8
```
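
For reference, the settings above correspond to a `peft` LoRA configuration along the lines of the sketch below. `lora_alpha` and `lora_dropout` are not stated in this card; the values shown are assumptions matching the alpaca-lora script's usual defaults.

```python
from peft import LoraConfig

# Sketch of the adapter configuration implied by the hyperparameters above.
# lora_alpha and lora_dropout are assumptions (not given in this card).
config = LoraConfig(
    r=16,                              # LoRA r from the list above
    lora_alpha=16,                     # assumption: script default
    lora_dropout=0.05,                 # assumption: script default
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
```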

Instructions for running the fine-tuning script can be found at https://github.com/tloen/alpaca-lora.
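
To apply the adapter at inference time, it can be loaded on top of the base model with `peft`. The snippet below is a minimal sketch, assuming `transformers`, `peft`, and `accelerate` are installed; the adapter repo id is a placeholder for this repository's Hub id, and the prompt follows the Alpaca instruction template.

```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

BASE_MODEL = "decapoda-research/llama-7b-hf"  # same base model as in finetune.py above
ADAPTER = "<this-repo-id>"                    # placeholder: replace with this adapter's Hub id

tokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL)
model = LlamaForCausalLM.from_pretrained(
    BASE_MODEL,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Attach the low-rank adapter weights to the frozen base model.
model = PeftModel.from_pretrained(model, ADAPTER, torch_dtype=torch.float16)
model.eval()

# The adapter was trained on Alpaca-style instruction prompts.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nTell me about alpacas.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The full generation script, including the exact prompt template, is in the alpaca-lora repository linked above.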