---
license: cc-by-nc-4.0
---

### Stanford Alpaca-7B

This repo hosts the weight diff for [Stanford Alpaca-7B](https://github.com/tatsu-lab/stanford_alpaca/). Applying this diff to Meta's original LLaMA weights reconstructs the Alpaca-7B model weights.

To recover the original Alpaca-7B weights, follow these steps:
```text
1. Convert Meta's released weights into Hugging Face format, following this guide:
    https://huggingface.co/docs/transformers/main/model_doc/llama
2. Clone the released weight diff to your local machine. The weight diff is located at:
    https://huggingface.co/tatsu-lab/alpaca-7b/tree/main
3. Run the recovery script with the correct paths, e.g.,
    python weight_diff.py recover --path_raw <path_to_step_1_dir> --path_diff <path_to_step_2_dir> --path_tuned <path_to_store_recovered_weights>
```
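
Conceptually, the recovery step adds the published diff to the base LLaMA weights, tensor by tensor. The sketch below illustrates that idea only; `weight_diff.py` in the Stanford Alpaca repo is the authoritative implementation and does more (e.g., tokenizer handling and consistency checks). The function name `recover_sketch` and the plain element-wise addition are assumptions for illustration.

```python
import torch
import transformers


def recover_sketch(path_raw: str, path_diff: str, path_tuned: str) -> None:
    """Illustrative only: reconstruct the tuned weights as base + diff."""
    # path_raw: LLaMA weights converted to Hugging Face format (step 1).
    # path_diff: the cloned weight diff (step 2).
    base = transformers.AutoModelForCausalLM.from_pretrained(path_raw, torch_dtype=torch.float32)
    diff = transformers.AutoModelForCausalLM.from_pretrained(path_diff, torch_dtype=torch.float32)

    base_sd = base.state_dict()
    diff_sd = diff.state_dict()
    with torch.no_grad():
        for name in base_sd:
            # Tuned parameter = base parameter + published diff (element-wise).
            base_sd[name].add_(diff_sd[name])

    base.save_pretrained(path_tuned)  # write the recovered Alpaca-7B weights (step 3 output)
```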

Once step 3 completes, you should have a directory with the recovered weights, from which you can load the model as follows:

```python
import transformers

# Load the recovered Alpaca-7B weights and tokenizer from the output directory of step 3.
alpaca_model = transformers.AutoModelForCausalLM.from_pretrained("<path_to_store_recovered_weights>")
alpaca_tokenizer = transformers.AutoTokenizer.from_pretrained("<path_to_store_recovered_weights>")
```
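
With the model and tokenizer loaded, a quick smoke test might look like the following. The instruction/response prompt template follows the format used in the Stanford Alpaca repo; the example instruction and the sampling parameters are arbitrary choices for illustration.

```python
# Hypothetical smoke test for the recovered model; sampling settings are arbitrary.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nList three primary colors.\n\n"
    "### Response:"
)
inputs = alpaca_tokenizer(prompt, return_tensors="pt")
output_ids = alpaca_model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9)
print(alpaca_tokenizer.decode(output_ids[0], skip_special_tokens=True))
```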