May I ask how I should convert my fine-tuned Alpaca LoRA to a format like yours?

#6
by chouks - opened

Sorry to bother you, but I've been looking around for a while and have yet to find a solution.
After I fine-tuned my version of Alpaca LoRA, all I got were files like "adapter_model.bin" and "adapter_config.json", along with some checkpoints. I can use them with the "generate.py" script provided by https://github.com/tloen/alpaca-lora.
But I couldn't find a way to build a "complete" model that I could load with transformers like all the other LLaMA-based models.
Then I ran into your project, and you managed to do exactly that.
Sorry for asking a question that isn't directly related to your model, but how did you convert your LoRA weights into this model?
Thank you in advance.

Hi @chouks , no problem.

This Python script merges your LoRA weights with the base LLaMA model. Just edit the model path to point at your fine-tuned version: https://github.com/tloen/alpaca-lora/blob/main/export_hf_checkpoint.py
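For intuition, what that export script does is fold the low-rank adapter matrices back into the frozen base weights, so the result loads as an ordinary transformers checkpoint with no PEFT dependency. The core arithmetic per adapted layer is W' = W + (alpha/r)·B·A. Here is a minimal NumPy sketch of that merge on a toy linear layer (dimensions and values are made up for illustration; the real script operates on the full LLaMA state dict):

```python
import numpy as np

# Toy dimensions: one linear layer with a rank-r LoRA adapter.
d_out, d_in, r = 8, 6, 2
alpha = 4                 # LoRA scaling hyperparameter (lora_alpha)
scaling = alpha / r

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))   # frozen base weight
A = rng.normal(size=(r, d_in))       # adapter down-projection
B = rng.normal(size=(d_out, r))      # adapter up-projection

# During adapted inference the layer computes W x + scaling * B A x.
# "Exporting" the LoRA means folding the adapter into the base weight:
W_merged = W + scaling * (B @ A)

# The merged weight reproduces the adapted layer exactly, which is why
# the exported checkpoint behaves like a plain LLaMA-based model.
x = rng.normal(size=(d_in,))
adapted = W @ x + scaling * (B @ (A @ x))
print(np.allclose(W_merged @ x, adapted))  # True
```

Because the merge is exact (up to floating-point rounding), the exported model needs no adapter files at inference time; you save it once with `save_pretrained` and load it like any other LLaMA checkpoint.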
