---
license: other
---

# OpenAssistant LLaMA 30B RLHF 2

Due to the license attached to LLaMA models by Meta AI, it is not possible to directly distribute LLaMA-based models. Instead, we provide XOR weights for the OA models.

Thanks to Mick for writing the `xor_codec.py` script, which enables this process.

Note: This process applies to the `oasst-rlhf-2-llama-30b-7k-steps` model. The same process can be applied to other models in the future, but the checksums will be different.

**This process has been tested only on Linux (specifically Ubuntu). Some users have reported that it does not work on Windows. We recommend using WSL if you only have a Windows machine.**

**Some users have also had problems with line endings causing the JSON XORs not to work. Please ensure your JSON files have LF (not CRLF) line endings.**

To use OpenAssistant LLaMA-based models, you should have a copy of the original LLaMA model weights and add them to a `llama` subdirectory here. If you cannot obtain the original LLaMA weights, see the italicized note below for a possible alternative.

Ensure your LLaMA 30B checkpoint matches the correct md5sums:

```
f856e9d99c30855d6ead4d00cc3a5573 consolidated.00.pth
ea0405cdb5bc638fee12de614f729ebc consolidated.03.pth
4babdbd05b8923226a9e9622492054b6 params.json
```

*If you do not have a copy of the original LLaMA weights and cannot obtain one, you may still be able to complete this process. Some users have reported that [this model](https://huggingface.co/elinas/llama-30b-hf-transformers-4.29) can be used as a base for the XOR conversion. This will also allow you to skip to Step 7. However, we only support conversion starting from the original LLaMA checkpoint, and cannot provide support if you experience issues with this alternative approach.*

**Important: Follow these exact steps to convert your original LLaMA checkpoint to a HuggingFace Transformers-compatible format. If you use the wrong versions of any dependency, you risk ending up with weights which are not compatible with the XOR files.**

1. Create a clean Python **3.10** virtual environment and activate it:

```
python3.10 -m venv xor_venv
source xor_venv/bin/activate
```

2. Clone the transformers repo and switch to the tested version:

```
git clone https://github.com/huggingface/transformers.git
cd transformers
git checkout d04ec99bec8a0b432fc03ed60cea9a1a20ebaf3c
pip install .
```

3. Install **exactly** these dependency versions:

```
pip install torch==1.13.1 accelerate==0.18.0 sentencepiece==0.1.98 protobuf==3.20.1
```

4. Check that your `pip freeze` output matches the following (the `transformers` line will point at your local clone path rather than `/mnt/data/koepf/transformers`):

```
accelerate==0.18.0
certifi==2022.12.7
charset-normalizer==3.1.0
filelock==3.12.0
huggingface-hub==0.13.4
idna==3.4
numpy==1.24.2
nvidia-cublas-cu11==11.10.3.66
nvidia-cuda-nvrtc-cu11==11.7.99
nvidia-cuda-runtime-cu11==11.7.99
nvidia-cudnn-cu11==8.5.0.96
packaging==23.1
protobuf==3.20.1
psutil==5.9.5
PyYAML==6.0
regex==2023.3.23
requests==2.28.2
sentencepiece==0.1.98
tokenizers==0.13.3
torch==1.13.1
tqdm==4.65.0
transformers @ file:///mnt/data/koepf/transformers
typing_extensions==4.5.0
urllib3==1.26.15
```

5. While in the `transformers` repo root, run the HF LLaMA conversion script:

```
python src/transformers/models/llama/convert_llama_weights_to_hf.py --input_dir <input_path_llama_base> --output_dir <output_path_llama30b_hf> --model_size 30B
```

6. Run `find . -type f -exec md5sum "{}" +` in the conversion target directory (`output_dir`). This should produce exactly the following checksums if your files are correct:

```
462a2d07f65776f27c0facfa2affb9f9 ./pytorch_model-00007-of-00007.bin
e1dc8c48a65279fb1fbccff14562e6a3 ./pytorch_model-00003-of-00007.bin
9cffb1aeba11b16da84b56abb773d099 ./pytorch_model-00001-of-00007.bin
aee09e21813368c49baaece120125ae3 ./generation_config.json
92754d6c6f291819ffc3dfcaf470f541 ./pytorch_model-00005-of-00007.bin
3eddc6fc02c0172d38727e5826181adb ./pytorch_model-00004-of-00007.bin
eeec4125e9c7560836b4873b6f8e3025 ./tokenizer.model
99762d59efa6b96599e863893cf2da02 ./pytorch_model-00006-of-00007.bin
598538f18fed1877b41f77de034c0c8a ./config.json
fdb311c39b8659a5d5c1991339bafc09 ./tokenizer.json
fecfda4fba7bfd911e187a85db5fa2ef ./pytorch_model.bin.index.json
edd1a5897748864768b1fab645b31491 ./tokenizer_config.json
6b2e0a735969660e720c27061ef3f3d3 ./special_tokens_map.json
5cfcb78b908ffa02e681cce69dbe4303 ./pytorch_model-00002-of-00007.bin
```

**Important: You should now have the correct LLaMA weights and be ready to apply the XORs. If the checksums above do not match yours, there is a problem.**

7. Once you have the LLaMA weights in the correct format, you can apply the XOR decoding:

```
python xor_codec.py oasst-rlhf-2-llama-30b-7k-steps/ oasst-rlhf-2-llama-30b-7k-steps-xor/ llama30b_hf/
```

You should **expect to see one warning message** during execution:

`Exception when processing 'added_tokens.json'`

This is normal. **If similar messages appear for other files, something has gone wrong.**

8. Now run `find . -type f -exec md5sum "{}" +` in the output directory (here `oasst-rlhf-2-llama-30b-7k-steps`). This should produce exactly these checksums:

```
d08594778f00abe70b93899628e41246 ./pytorch_model-00007-of-00007.bin
ed59bfee4e87b9193fea5897d610ab24 ./tokenizer_config.json
ed991042b2a449123824f689bb94b29e ./pytorch_model-00002-of-00007.bin
```

If so, you have successfully decoded the weights and should be able to use the model with HuggingFace Transformers. **If your checksums do not match those above, there is a problem.**

- **OASST dataset paper:** https://arxiv.org/abs/2304.07327