lxuechen committed on
Commit 899b904
1 Parent(s): 12b6fd7

Update README.md

Files changed (1)
  1. README.md +22 -0
README.md CHANGED
@@ -1,3 +1,25 @@
  ---
  license: other
  ---
+
+ ### Stanford Alpaca-7B
+
+ This repo hosts the weight diff for Stanford Alpaca-7B, which reconstructs the original Alpaca-7B model weights when applied to Meta's LLaMA weights.
+
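+ The recovery itself is done by the `weight_diff.py` script (step 3 below). Purely as a conceptual illustration of what "applying a weight diff" means, and not the actual implementation, the recovered weights are the base weights plus the diff, tensor by tensor:
+
+ ```python
+ import torch
+
+ def apply_weight_diff(base_state_dict, diff_state_dict):
+     # Illustrative helper only (not the real weight_diff.py): each recovered
+     # tensor is the LLaMA base tensor plus the corresponding diff tensor.
+     return {name: base_state_dict[name] + diff_state_dict[name]
+             for name in diff_state_dict}
+
+ # Toy demonstration with stand-in tensors.
+ base = {"layer.weight": torch.zeros(2, 2)}
+ diff = {"layer.weight": torch.ones(2, 2)}
+ recovered = apply_weight_diff(base, diff)
+ ```
+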
+ To recover the original Alpaca-7B weights, follow these steps:
+ ```text
+ 1. Convert Meta's released weights into Hugging Face format, following this guide:
+ https://huggingface.co/docs/transformers/main/model_doc/llama
+ 2. Make sure you have cloned the released weight diff to your local machine (a programmatic download option is sketched after this block). The weight diff is located at:
+ https://huggingface.co/tatsu-lab/alpaca-7b/tree/main
+ 3. Run the recovery script with the correct paths, e.g.:
+ python weight_diff.py recover --path_raw <path_to_step_1_dir> --path_diff <path_to_step_2_dir> --path_tuned <path_to_store_recovered_weights>
+ ```
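+
+ For step 2, an alternative to cloning with git is to fetch the diff programmatically. This is a minimal sketch assuming the `huggingface_hub` package is installed; the directory it returns can be passed to `weight_diff.py` as `--path_diff`:
+
+ ```python
+ from huggingface_hub import snapshot_download
+
+ # Download the weight-diff repo into the local Hugging Face cache and
+ # return the path of the downloaded snapshot.
+ diff_dir = snapshot_download(repo_id="tatsu-lab/alpaca-7b")
+ print(diff_dir)  # use this directory as --path_diff in step 3
+ ```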
+
+ Once step 3 completes, you should have a directory with the recovered weights, from which you can load the model as follows:
+
+ ```python
+ import transformers
+
+ # Load the recovered Alpaca-7B model and tokenizer from the directory
+ # written by weight_diff.py (--path_tuned above).
+ alpaca_model = transformers.AutoModelForCausalLM.from_pretrained("<path_to_store_recovered_weights>")
+ alpaca_tokenizer = transformers.AutoTokenizer.from_pretrained("<path_to_store_recovered_weights>")
+ ```
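+
+ As a quick sanity check of the recovered weights, you can generate a short completion with the loaded model. The prompt and generation settings below are arbitrary illustrative choices, not part of the original instructions:
+
+ ```python
+ # Encode a toy prompt, generate a short continuation, and decode it.
+ inputs = alpaca_tokenizer("Tell me something about alpacas.", return_tensors="pt")
+ output_ids = alpaca_model.generate(**inputs, max_new_tokens=64)
+ print(alpaca_tokenizer.decode(output_ids[0], skip_special_tokens=True))
+ ```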