TheBloke committed on
Commit a878362
1 Parent(s): 5b14024

Upload new k-quant GGML quantised models.

Files changed (1): README.md +204 -36
README.md CHANGED
@@ -1,7 +1,8 @@
---
- license: other
inference: false
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
@@ -16,67 +17,87 @@ inference: false
</div>
<!-- header end -->

- # OpenAssistant LLaMA 30B SFT 7 GGML

- This is a repo of GGML format models for [OpenAssistant's LLaMA 30B SFT 7](https://huggingface.co/OpenAssistant/oasst-sft-7-llama-30b-xor).

- It is the result of merging the XORs from the above repo with the original Llama 30B weights, and then quantising to 4bit and 5bit GGML for CPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp).
-
- This is epoch 7 of OpenAssistant's training of their Llama 30B model.

## Repositories available

- * [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/OpenAssistant-SFT-7-Llama-30B-GPTQ).
- * [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/OpenAssistant-SFT-7-Llama-30B-GGML).
- * [Unquantised 16bit model in HF format](https://huggingface.co/TheBloke/OpenAssistant-SFT-7-Llama-30B-HF).
- ## PROMPT TEMPLATE

- This model requires the following prompt template:

- ```
- <|prompter|> prompt goes here
- <|assistant|>:
- ```

- ## THE FILES IN MAIN BRANCH REQUIRE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!

- llama.cpp recently made another breaking change to its quantisation methods - https://github.com/ggerganov/llama.cpp/pull/1508

- I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 19th or later (commit `2d5db48` or later) to use them.

- For files compatible with the previous version of llama.cpp, please see branch `previous_llama_ggmlv2`.

## Provided files

- | Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
- | `OpenAssistant-30B-epoch7.ggmlv3.q4_0.bin` | q4_0 | 4bit | 20.3GB | 23GB | 4-bit. |
- | `OpenAssistant-30B-epoch7.ggmlv3.q4_1.bin` | q4_1 | 4bit | 22.4GB | 25GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
- | `OpenAssistant-30B-epoch7.ggmlv3.q5_0.bin` | q5_0 | 5bit | 22.4GB | 25GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
- | `OpenAssistant-30B-epoch7.ggmlv3.q5_1.bin` | q5_1 | 5bit | 24.4GB | 27GB | 5-bit. Even higher accuracy, resource usage and slower inference. |
- | `OpenAssistant-30B-epoch7.ggmlv3.q8_0.bin` | q8_0 | 8bit | 24.4GB | 27GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |
-

## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
- ./main -t 18 -m OpenAssistant-30B-epoch7.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|prompter|>Write a story about llamas <|assistant|>:"
```

- Change `-t 18` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.

- ## How to run in `text-generation-webui`

- GGML models can be loaded into text-generation-webui by installing the llama.cpp module, then placing the ggml model file in a model folder as usual.

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

- Note: at this time text-generation-webui will likely not support the newly updated llama.cpp quantisation methods.
-
- **Thireus** has written a [great guide on how to update it to the latest llama.cpp code](https://huggingface.co/TheBloke/wizardLM-7B-GGML/discussions/5) so that you can likely get support for the new quantisation methods sooner.
-
<!-- footer start -->
## Discord

@@ -97,11 +118,158 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

- **Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.

Thank you to all my generous patrons and donaters!

<!-- footer end -->
- # Original model card

```
llama-30b-sft-7:

---
inference: false
+ license: other
---
+
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">

</div>
<!-- header end -->

+ # OpenAssistant SFT 7 Llama 30B GGML

+ These files are GGML format model files for [OpenAssistant SFT 7 Llama 30B](https://huggingface.co/OpenAssistant/oasst-sft-7-llama-30b-xor).

+ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
+ * [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
+ * [KoboldCpp](https://github.com/LostRuins/koboldcpp)
+ * [ParisNeo/GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
+ * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
+ * [ctransformers](https://github.com/marella/ctransformers)

## Repositories available

+ * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/OpenAssistant-SFT-7-Llama-30B-GPTQ)
+ * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/OpenAssistant-SFT-7-Llama-30B-GGML)
+ * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/OpenAssistant-SFT-7-Llama-30B-HF)

+ <!-- compatibility_ggml start -->
+ ## Compatibility

+ ### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`

+ I have quantised these 'original' quant method files using an older version of llama.cpp, so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
+
+ They should be compatible with all current UIs and libraries that use llama.cpp, such as those listed at the top of this README.
+
+ ### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
+
+ These new quantisation methods are only compatible with llama.cpp as of June 6th, commit `2d43387`.

+ They will NOT yet be compatible with koboldcpp, text-generation-webui, and other UIs and libraries. Support is expected to come over the next few days.

+ ## Explanation of the new k-quant methods

+ The new methods available are:
+ * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
+ * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
+ * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
+ * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
+ * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
+ * GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.

+ Refer to the Provided Files table below to see what files use which methods, and how; a short worked example of the bpw arithmetic follows.
+ <!-- compatibility_ggml end -->
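
To make the bits-per-weight figures above concrete, here is a small worked example that reproduces them from the super-block layouts just described. It is a sketch, not ggml's actual struct layout: the fp16 super-block scale (and min, for "type-1" methods) is an assumption of mine, and q2_K is omitted because its 2.5625 bpw figure reflects additional layout details.

```
# Sketch: reproduce the stated bits-per-weight (bpw) figures from the
# super-block layouts described above. Assumes each super-block also
# stores one fp16 scale ("type-0") or an fp16 scale plus an fp16 min
# ("type-1"); the exact structs in ggml may differ.
def bpw(weight_bits, n_blocks, block_size, scale_bits, type_1):
    """Bits per weight for one super-block of n_blocks * block_size weights."""
    n_weights = n_blocks * block_size
    bits = n_weights * weight_bits        # the quantized weights themselves
    bits += n_blocks * scale_bits         # per-block scales
    if type_1:
        bits += n_blocks * scale_bits     # per-block mins ("type-1" only)
        bits += 2 * 16                    # fp16 super-block scale + min (assumed)
    else:
        bits += 16                        # fp16 super-block scale (assumed)
    return bits / n_weights

print(bpw(3, 16, 16, 6, type_1=False))  # 3.4375 -> matches q3_K
print(bpw(4, 8, 32, 6, type_1=True))    # 4.5    -> matches q4_K
print(bpw(5, 8, 32, 6, type_1=True))    # 5.5    -> matches q5_K
print(bpw(6, 16, 16, 8, type_1=False))  # 6.5625 -> matches q6_K
```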

## Provided files
+ | Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
+ | OpenAssistant-SFT-7-Llama-30B.ggmlv3.q2_K.bin | q2_K | 2 | 13.60 GB | 16.10 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
+ | OpenAssistant-SFT-7-Llama-30B.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 17.20 GB | 19.70 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
+ | OpenAssistant-SFT-7-Llama-30B.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 15.64 GB | 18.14 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
+ | OpenAssistant-SFT-7-Llama-30B.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 13.98 GB | 16.48 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
+ | OpenAssistant-SFT-7-Llama-30B.ggmlv3.q4_0.bin | q4_0 | 4 | 18.30 GB | 20.80 GB | Original llama.cpp quant method, 4-bit. |
+ | OpenAssistant-SFT-7-Llama-30B.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 19.57 GB | 22.07 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
+ | OpenAssistant-SFT-7-Llama-30B.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 18.30 GB | 20.80 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
+ | OpenAssistant-SFT-7-Llama-30B.ggmlv3.q5_0.bin | q5_0 | 5 | 22.37 GB | 24.87 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
+ | OpenAssistant-SFT-7-Llama-30B.ggmlv3.q5_1.bin | q5_1 | 5 | 24.40 GB | 26.90 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
+ | OpenAssistant-SFT-7-Llama-30B.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 23.02 GB | 25.52 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
+ | OpenAssistant-SFT-7-Llama-30B.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 22.37 GB | 24.87 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
+ | OpenAssistant-SFT-7-Llama-30B.ggmlv3.q6_K.bin | q6_K | 6 | 26.69 GB | 29.19 GB | New k-quant method. Uses GGML_TYPE_Q6_K - 6-bit quantization - for all tensors |
+
+ **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
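
Incidentally, every row above shows Max RAM exceeding the file size by 2.50 GB, so a rough rule of thumb for other GGML files of this model is file size plus about 2.5 GB. This is an observation from the table, not a guarantee:

```
# Rough RAM estimator based on the pattern in the table above, where
# Max RAM = file size + 2.50 GB in every row. An observation, not a
# guarantee; actual usage also depends on context size and settings.
def estimate_max_ram_gb(file_size_gb: float, overhead_gb: float = 2.5) -> float:
    return file_size_gb + overhead_gb

print(estimate_max_ram_gb(19.57))  # q4_K_M -> 22.07 GB, matching the table
```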

## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
+ ./main -t 10 -ngl 32 -m OpenAssistant-SFT-7-Llama-30B.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```

+ Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.

+ Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

+ If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
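
Since llama-cpp-python is one of the libraries listed above as supporting GGML files, here is a minimal sketch of the equivalent setup through it, mirroring the parameters of the command line above. Treat it as illustrative: the model path and generation settings are examples, and keyword arguments may vary with your installed version.

```
# Minimal sketch using llama-cpp-python (listed above as GGML-compatible).
# Parameters mirror the ./main example; adjust paths and values as needed.
from llama_cpp import Llama

llm = Llama(
    model_path="OpenAssistant-SFT-7-Llama-30B.ggmlv3.q5_0.bin",
    n_ctx=2048,       # context size, as with -c 2048
    n_threads=10,     # physical CPU cores, as with -t 10
    n_gpu_layers=32,  # layers to offload to GPU, as with -ngl 32
)

output = llm(
    "### Instruction: Write a story about llamas\n### Response:",
    max_tokens=512,
    temperature=0.7,
    repeat_penalty=1.1,
)
print(output["choices"][0]["text"])
```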
+ ## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

<!-- footer start -->
## Discord

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

+ **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
+
+ **Patreon special mentions**: Oscar Rangel, Eugene Pentland, Talal Aujan, Cory Kujawski, Luke, Asp the Wyvern, Ai Maven, Pyrater, Alps Aficionado, senxiiz, Willem Michiel, Junyu Yang, trip7s trip, Sebastain Graf, Joseph William Delisle, Lone Striker, Jonathan Leane, Johann-Peter Hartmann, David Flickinger, Spiking Neurons AB, Kevin Schuppel, Mano Prime, Dmitriy Samsonov, Sean Connelly, Nathan LeClaire, Alain Rossmann, Fen Risland, Derek Yates, Luke Pendergrass, Nikolai Manek, Khalefa Al-Ahmad, Artur Olbinski, John Detwiler, Ajan Kanaga, Imad Khwaja, Trenton Dambrowitz, Kalila, vamX, webtim, Illia Dulskyi.

Thank you to all my generous patrons and donaters!
+
<!-- footer end -->
+
+ # Original model card: OpenAssistant SFT 7 Llama 30B
+
+ # OpenAssistant LLaMA 30B SFT 7
+
+ Due to the license attached to LLaMA models by Meta AI it is not possible to directly distribute LLaMA-based models. Instead we provide XOR weights for the OA models.
+
+ Thanks to Mick for writing the `xor_codec.py` script which enables this process.
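
As a toy illustration of why distributing XOR weights works (this is not the actual `xor_codec.py`, just the underlying idea): XOR is self-inverting, so applying the published XOR bytes to the original LLaMA bytes reconstructs the OA weights, while the XOR bytes on their own reveal neither model.

```
# Toy illustration of the XOR idea behind the released weights.
# This is NOT xor_codec.py; it only shows why XOR distribution works.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

llama = b"original LLaMA weights 0123"   # stand-in for the Meta weights
oa = b"fine-tuned OASST weights 01"      # stand-in, same length for the toy

xor_file = xor_bytes(llama, oa)          # what gets distributed
recovered = xor_bytes(llama, xor_file)   # what the user reconstructs
assert recovered == oa
```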
+
+ ## The Process
+
+ Note: This process applies to the `oasst-sft-7-llama-30b` model. The same process can be applied to other models in the future, but the checksums will be different.
+
+ **This process is tested only on Linux (specifically Ubuntu). Some users have reported that the process does not work on Windows. We recommend using WSL if you only have a Windows machine.**
+
+ To use OpenAssistant LLaMA-Based Models, you should have a copy of the original LLaMA model weights and add them to a `llama` subdirectory here. If you cannot obtain the original LLaMA, see the note in italic below for a possible alternative.
+
+ Ensure your LLaMA 30B checkpoint matches the correct md5sums:
+
+ ```
+ f856e9d99c30855d6ead4d00cc3a5573 consolidated.00.pth
+ d9dbfbea61309dc1e087f5081e98331a consolidated.01.pth
+ 2b2bed47912ceb828c0a37aac4b99073 consolidated.02.pth
+ ea0405cdb5bc638fee12de614f729ebc consolidated.03.pth
+ 4babdbd05b8923226a9e9622492054b6 params.json
+ ```
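
If you would rather verify these programmatically than eyeball `md5sum` output, a sketch along these lines works; the expected checksums are copied from the block above, and the `llama/30B` directory path is an assumption to adjust for your layout.

```
# Sketch: verify the LLaMA 30B checkpoint md5sums listed above.
# The llama/30B directory path is an assumption; adjust to your layout.
import hashlib
from pathlib import Path

EXPECTED = {
    "consolidated.00.pth": "f856e9d99c30855d6ead4d00cc3a5573",
    "consolidated.01.pth": "d9dbfbea61309dc1e087f5081e98331a",
    "consolidated.02.pth": "2b2bed47912ceb828c0a37aac4b99073",
    "consolidated.03.pth": "ea0405cdb5bc638fee12de614f729ebc",
    "params.json": "4babdbd05b8923226a9e9622492054b6",
}

def md5_of(path: Path, chunk_size: int = 1 << 20) -> str:
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

for name, expected in EXPECTED.items():
    actual = md5_of(Path("llama/30B") / name)
    print(f"{name}: {'OK' if actual == expected else 'MISMATCH'}")
```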
+
+ *If you do not have a copy of the original LLaMA weights and cannot obtain one, you may still be able to complete this process. Some users have reported that [this model](https://huggingface.co/elinas/llama-30b-hf-transformers-4.29) can be used as a base for the XOR conversion. This will also allow you to skip to Step 7. However, we only support conversion starting from the original LLaMA checkpoint and cannot provide support if you experience issues with this alternative approach.*
+
+ **Important: Follow these exact steps to convert your original LLaMA checkpoint to a HuggingFace Transformers-compatible format. If you use the wrong versions of any dependency, you risk ending up with weights which are not compatible with the XOR files.**
+
+ 1. Create a clean Python **3.10** virtual environment & activate it:
+
+ ```
+ python3.10 -m venv xor_venv
+ source xor_venv/bin/activate
+ ```
+
+ 2. Clone the transformers repo and switch to the tested version:
+
+ ```
+ git clone https://github.com/huggingface/transformers.git
+ cd transformers
+ git checkout d04ec99bec8a0b432fc03ed60cea9a1a20ebaf3c
+ pip install .
+ ```
+
+ 3. Install **exactly** these dependency versions:
+
+ ```
+ pip install torch==1.13.1 accelerate==0.18.0 sentencepiece==0.1.98 protobuf==3.20.1
+ ```
+
+ 4. Check `pip freeze` output:
+
+ ```
+ accelerate==0.18.0
+ certifi==2022.12.7
+ charset-normalizer==3.1.0
+ filelock==3.12.0
+ huggingface-hub==0.13.4
+ idna==3.4
+ numpy==1.24.2
+ nvidia-cublas-cu11==11.10.3.66
+ nvidia-cuda-nvrtc-cu11==11.7.99
+ nvidia-cuda-runtime-cu11==11.7.99
+ nvidia-cudnn-cu11==8.5.0.96
+ packaging==23.1
+ protobuf==3.20.1
+ psutil==5.9.5
+ PyYAML==6.0
+ regex==2023.3.23
+ requests==2.28.2
+ sentencepiece==0.1.98
+ tokenizers==0.13.3
+ torch==1.13.1
+ tqdm==4.65.0
+ transformers @ file:///mnt/data/koepf/transformers
+ typing_extensions==4.5.0
+ urllib3==1.26.15
+ ```
+
+ 5. While in the `transformers` repo root, run the HF LLaMA conversion script:
+
+ ```
+ python src/transformers/models/llama/convert_llama_weights_to_hf.py --input_dir <input_path_llama_base> --output_dir <output_path_llama30b_hf> --model_size 30B
+ ```
+
+ 6. Run `find . -type f -exec md5sum "{}" +` in the conversion target directory (`output_dir`). This should produce exactly the following checksums if your files are correct:
+
+ ```
+ 462a2d07f65776f27c0facfa2affb9f9 ./pytorch_model-00007-of-00007.bin
+ e1dc8c48a65279fb1fbccff14562e6a3 ./pytorch_model-00003-of-00007.bin
+ 9cffb1aeba11b16da84b56abb773d099 ./pytorch_model-00001-of-00007.bin
+ aee09e21813368c49baaece120125ae3 ./generation_config.json
+ 92754d6c6f291819ffc3dfcaf470f541 ./pytorch_model-00005-of-00007.bin
+ 3eddc6fc02c0172d38727e5826181adb ./pytorch_model-00004-of-00007.bin
+ eeec4125e9c7560836b4873b6f8e3025 ./tokenizer.model
+ 99762d59efa6b96599e863893cf2da02 ./pytorch_model-00006-of-00007.bin
+ 598538f18fed1877b41f77de034c0c8a ./config.json
+ fdb311c39b8659a5d5c1991339bafc09 ./tokenizer.json
+ fecfda4fba7bfd911e187a85db5fa2ef ./pytorch_model.bin.index.json
+ edd1a5897748864768b1fab645b31491 ./tokenizer_config.json
+ 6b2e0a735969660e720c27061ef3f3d3 ./special_tokens_map.json
+ 5cfcb78b908ffa02e681cce69dbe4303 ./pytorch_model-00002-of-00007.bin
+ ```
+
+ **Important: You should now have the correct LLaMA weights and be ready to apply the XORs. If the checksums above do not match yours, there is a problem.**
+
+ 7. Once you have LLaMA weights in the correct format, you can apply the XOR decoding:
+
+ ```
+ python xor_codec.py oasst-sft-7-llama-30b/ oasst-sft-7-llama-30b-xor/ llama30b_hf/
+ ```
+
+ You should **expect to see one warning message** during execution:
+
+ `Exception when processing 'added_tokens.json'`
+
+ This is normal. **If similar messages appear for other files, something has gone wrong**.
+
+ 8. Now run `find . -type f -exec md5sum "{}" +` in the output directory (here `oasst-sft-7-llama-30b`). You should get exactly these checksums:
+
+ ```
+ 8ae4537c64a1ef202d1d82eb0d356703 ./pytorch_model-00007-of-00007.bin
+ d84f99d23369e159e50cb0597b6c9673 ./pytorch_model-00003-of-00007.bin
+ f7de50a725d678eb65cc3dced727842f ./pytorch_model-00001-of-00007.bin
+ 27b0dc092f99aa2efaf467b2d8026c3f ./added_tokens.json
+ aee09e21813368c49baaece120125ae3 ./generation_config.json
+ 31a2b04b139f4af043ad04478f1497f5 ./pytorch_model-00005-of-00007.bin
+ a16a2dfacbde77a1659a7c9df7966d0a ./pytorch_model-00004-of-00007.bin
+ eeec4125e9c7560836b4873b6f8e3025 ./tokenizer.model
+ baa778a8679d47b085446faf97b72758 ./pytorch_model-00006-of-00007.bin
+ b2d64f2198ab7b53e3b8d12fbcadeb3c ./config.json
+ deb33dd4ffc3d2baddcce275a00b7c1b ./tokenizer.json
+ 76d47e4f51a8df1d703c6f594981fcab ./pytorch_model.bin.index.json
+ ed59bfee4e87b9193fea5897d610ab24 ./tokenizer_config.json
+ 704373f0c0d62be75e5f7d41d39a7e57 ./special_tokens_map.json
+ e836168cdbbb74db51d04f25ed6408ce ./pytorch_model-00002-of-00007.bin
+ ```
+
+ If so, you have successfully decoded the weights and should be able to use the model with HuggingFace Transformers. **If your checksums do not match those above, there is a problem.**
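
As a quick smoke test of the decoded weights, you can load them with Transformers. This is a hedged sketch: the directory name follows the step 7 output, the prompt template is the `<|prompter|>...<|assistant|>:` form this model expects, and the hardware settings are illustrative (a 30B model in fp16 needs roughly 65 GB of RAM or VRAM).

```
# Sketch: smoke-test the decoded weights with HuggingFace Transformers.
# The directory name follows the xor_codec.py output path from step 7;
# generation settings are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "oasst-sft-7-llama-30b"
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    torch_dtype=torch.float16,
    device_map="auto",  # requires accelerate, installed in step 3
)

prompt = "<|prompter|>Write a story about llamas<|assistant|>:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```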
+
+ ### Configuration

```
llama-30b-sft-7: