---
license: other
---

# OpenAssistant LLaMa-Based Models

Due to the license attached to LLaMa models by Meta AI, it is not possible to distribute LLaMa-based models directly. Instead, we provide XOR weights for the OA models.

Thanks to Mick for writing the `xor_codec.py` script which enables this process.

## The Process

Note: This process applies to the `oasst-sft-6-llama-30b` model. The same process can be applied to other models in the future, but the checksums will be different.

To use OpenAssistant LLaMa-based models, you need to have a copy of the original LLaMa model weights and add them to a `llama` subdirectory here.

Ensure your LLaMa 30B checkpoint matches the correct md5sums:

```
f856e9d99c30855d6ead4d00cc3a5573 consolidated.00.pth
d9dbfbea61309dc1e087f5081e98331a consolidated.01.pth
2b2bed47912ceb828c0a37aac4b99073 consolidated.02.pth
ea0405cdb5bc638fee12de614f729ebc consolidated.03.pth
4babdbd05b8923226a9e9622492054b6 params.json
```

These can be converted to HuggingFace Transformers-compatible weights using the script available [here](https://github.com/huggingface/transformers/blob/28f26c107b4a1c5c7e32ed4d9575622da0627a40/src/transformers/models/llama/convert_llama_weights_to_hf.py).

**Important**: This was tested with transformers 4.28.0.dev0 from git (git hash: **28f26c107b4a1c5c7e32ed4d9575622da0627a40**). Make sure the package tokenizers 0.13.3 is installed. Using different versions may result in broken outputs.

```
PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python python convert_llama_weights_to_hf.py --input_dir ~/llama/ --output_dir ~/llama30b_hf/ --model_size 30B
```

Run `find -type f -exec md5sum "{}" + > checklist.chk` in the conversion target directory. This should produce a `checklist.chk` with exactly the following content if your files are correct:

```
d0e13331c103453e9e087d59dcf05432 ./pytorch_model-00001-of-00007.bin
29aae4d31a0a4fe6906353001341d493 ./pytorch_model-00002-of-00007.bin
b40838eb4e68e087b15b3d653ca1f5d7 ./pytorch_model-00003-of-00007.bin
f845ecc481cb92b8a0586c2ce288b828 ./pytorch_model-00004-of-00007.bin
f3b13d089840e6caf22cd6dd05b77ef0 ./pytorch_model-00005-of-00007.bin
12e0d2d7a9c00c4237b1b0143c48a05e ./pytorch_model-00007-of-00007.bin
1348f7c8bb3ee4408b69305a10bdfafb ./pytorch_model-00006-of-00007.bin
aee09e21813368c49baaece120125ae3 ./generation_config.json
eeec4125e9c7560836b4873b6f8e3025 ./tokenizer.model
598538f18fed1877b41f77de034c0c8a ./config.json
fdb311c39b8659a5d5c1991339bafc09 ./tokenizer.json
b77e99aa2ddc3df500c2b2dc4455a6af ./pytorch_model.bin.index.json
edd1a5897748864768b1fab645b31491 ./tokenizer_config.json
6b2e0a735969660e720c27061ef3f3d3 ./special_tokens_map.json
```
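Since there are three checksum listings to verify in this process (the original checkpoint, the converted weights above, and the decoded output below), a small helper can replace manual comparison. The following is a minimal sketch, not part of this repository; the script name and the `expected.chk` file (one of the listings from this README pasted into a text file) are illustrative:

```
# verify_checksums.py -- illustrative helper, not part of this repo.
# Usage: python verify_checksums.py expected.chk <target_directory>
import hashlib
import sys
from pathlib import Path

def md5(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute an md5 digest in chunks, so multi-GB shards are not read into memory at once."""
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected_file, target_dir = sys.argv[1], Path(sys.argv[2])
ok = True
for line in Path(expected_file).read_text().splitlines():
    if not line.strip():
        continue
    digest, name = line.split()  # each line is "<md5> <filename>"
    actual = md5(target_dir / name)
    if actual != digest:
        ok = False
        print(f"MISMATCH {name}: expected {digest}, got {actual}")
print("all checksums match" if ok else "verification FAILED")
sys.exit(0 if ok else 1)
```

For example, `python verify_checksums.py expected.chk ~/llama30b_hf/` checks the converted weights against the listing above.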
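For intuition about the next step: conceptually, the distributed XOR files are the OA model files XORed with these converted LLaMa files, and since XOR is its own inverse, applying the codec with your LLaMa weights recovers the OA model. This is also why every checksum above must match exactly. A toy sketch of the principle (this is not the actual `xor_codec.py` implementation):

```
# Toy illustration of XOR encoding/decoding -- NOT the real xor_codec.py.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    # XOR is its own inverse: (a ^ b) ^ b == a
    return bytes(x ^ y for x, y in zip(a, b))

original = b"OpenAssistant weights"
base     = b"LLaMa base weights..."   # padded to the same length for this toy
released = xor_bytes(original, base)  # what actually gets distributed
assert xor_bytes(released, base) == original  # decoding recovers the model
```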
Once you have LLaMa weights in the correct format, you can apply the XOR decoding:

```
python xor_codec.py oasst-sft-6-llama-30b/ oasst-sft-6-llama-30b-xor/ llama30b_hf/
```

You should expect to see one warning message during execution:

`Exception when processing 'added_tokens.json'`

This is normal. If similar messages appear for other files, something has gone wrong.

Now run `find -type f -exec md5sum "{}" + > checklist.chk` in the output directory (here `oasst-sft-6-llama-30b`). You should get a file with exactly these contents:

```
970e99665d66ba3fad6fdf9b4910acc5 ./pytorch_model-00007-of-00007.bin
659fcb7598dcd22e7d008189ecb2bb42 ./pytorch_model-00003-of-00007.bin
ff6e4cf43ddf02fb5d3960f850af1220 ./pytorch_model-00001-of-00007.bin
27b0dc092f99aa2efaf467b2d8026c3f ./added_tokens.json
aee09e21813368c49baaece120125ae3 ./generation_config.json
740c324ae65b1ec25976643cda79e479 ./pytorch_model-00005-of-00007.bin
f7aefb4c63be2ac512fd905b45295235 ./pytorch_model-00004-of-00007.bin
eeec4125e9c7560836b4873b6f8e3025 ./tokenizer.model
369df2f0e38bda0d9629a12a77c10dfc ./pytorch_model-00006-of-00007.bin
27b9c7c8c62db80e92de14724f4950f3 ./config.json
deb33dd4ffc3d2baddcce275a00b7c1b ./tokenizer.json
76d47e4f51a8df1d703c6f594981fcab ./pytorch_model.bin.index.json
ed59bfee4e87b9193fea5897d610ab24 ./tokenizer_config.json
130f5e690becc2223f59384887c2a505 ./special_tokens_map.json
ae48c4c68e4e171d502dd0896aa19a84 ./pytorch_model-00002-of-00007.bin
```

If so, you have successfully decoded the weights and should be able to use the model with HuggingFace Transformers.
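As a final smoke test, the decoded directory can be loaded like any other local Transformers checkpoint. A minimal sketch, assuming the decoded weights are in `./oasst-sft-6-llama-30b`, that `accelerate` is installed for `device_map="auto"`, and the prompt template shown (the OpenAssistant chat format is an assumption here; verify it against the model card):

```
# Minimal loading sketch -- the path and prompt format are assumptions, see above.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

model_path = "oasst-sft-6-llama-30b"  # the decoded output directory from above
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # roughly 60 GB of weights in fp16
    device_map="auto",          # requires the accelerate package
)

# Assumed OpenAssistant prompt format: <|prompter|>...</s><|assistant|>
prompt = "<|prompter|>What is a XOR cipher?</s><|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0]))
```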