Unable to recover checkpoints in applying the XORs #13


Hi, I am trying to recover the pygmalion-7b checkpoints using the LLaMA-7B weights from https://huggingface.co/nyanko7/LLaMA-7B/tree/main. I converted the LLaMA weights to Hugging Face format with the conversion script from transformers 4.32.1, then ran xor_codec.py, but the resulting hashes do not match the expected ones. The result is:
```
4608facb4910118f8dfa80f090cbc4dc config.json
2917a1cafb895cf57e746cfd7696bfe5 generation_config.json
7c1a3308e53852f3480ea70ded180460 pytorch_model-00001-of-00002.bin
6a5100ef132c7c4ec0b4bf1716870357 pytorch_model-00002-of-00002.bin
81648ef3915ed2e83d49fed93122d53e pytorch_model.bin.index.json
6b2e0a735969660e720c27061ef3f3d3 special_tokens_map.json
9b3cf7b8c0e4783dbc1419b4cafe8e1e tokenizer_config.json
fdb311c39b8659a5d5c1991339bafc09 tokenizer.json
eeec4125e9c7560836b4873b6f8e3025 tokenizer.model
```
I have verified that the sha256 checksums of both the llama-7b checkpoints and the xor_encoded_files are correct.
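For context on why the converted base checkpoint must match byte-for-byte: XOR encoding is byte-wise, so any difference in the base files (e.g. from a different converter version) propagates directly into the recovered files and changes their hashes. A minimal sketch of the idea (hypothetical helper, not the actual xor_codec.py):

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Stand-in data, both 18 bytes long (illustration only).
base = b"llama base weights"          # base-model file contents
target = b"pygmalion weights!"        # fine-tuned file contents
encoded = xor_bytes(base, target)     # what the repo would distribute

# Decoding with the exact same base recovers the target.
assert xor_bytes(base, encoded) == target

# Flip a single byte of the base: decoding still "succeeds"
# but silently yields different bytes (and a different hash).
corrupt = bytearray(base)
corrupt[0] ^= 0xFF
assert xor_bytes(bytes(corrupt), encoded) != target
```

This is why a mismatch is usually traced back to the base-model conversion step rather than the XOR step itself.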
Could you explain why I can't get the correct model?
