Thanks, but how did you extract the LoRA?

#1
by YaTharThShaRma999

I really appreciate that you made LoRAs of the model. That really helped me run the HelixNet model.

However, one thing I noticed is that you say you extracted the LoRAs? How exactly did you do that?

The process is to compare the fine-tuned model with the original base model (Mistral in this case), computing a delta of the weights for each of the model's layers. You then factorize each delta with SVD, truncated to a given rank, which gives you the A and B matrices of an equivalent LoRA.
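Here's a minimal sketch of that idea in PyTorch for a single weight matrix. The function name, the even split of the singular values between the two factors, and the state-dict keys in the usage comment are illustrative assumptions, not taken from the multi_loras code:

```python
import torch

def extract_lora(w_base: torch.Tensor, w_ft: torch.Tensor, rank: int):
    """Approximate the fine-tune's weight delta with a rank-`rank` LoRA via truncated SVD."""
    delta = (w_ft - w_base).float()                 # what the fine-tune changed in this layer
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    U, S, Vh = U[:, :rank], S[:rank], Vh[:rank, :]  # keep only the top-`rank` components
    # Split each singular value evenly between the two factors,
    # so that B @ A == U @ diag(S) @ Vh, the best rank-`rank` approximation of delta.
    B = U * S.sqrt()                  # shape: (out_features, rank)
    A = S.sqrt().unsqueeze(1) * Vh    # shape: (rank, in_features)
    return A, B

# Hypothetical usage on one attention projection:
# A, B = extract_lora(base_sd["model.layers.0.self_attn.q_proj.weight"],
#                     ft_sd["model.layers.0.self_attn.q_proj.weight"], rank=64)
```

Running something like this over every linear layer's weight pair and saving the resulting A/B matrices in an adapter format would give you the extracted LoRA.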

There's a GitHub project that I followed here: https://github.com/uukuguy/multi_loras. I ended up generating a range of LoRAs of different ranks to try (32, 48, 64 and 128) and settled on the rank-64 one for use in this project. The higher the rank, the greater the LoRA's fidelity, but at the expense of size.
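To put the size trade-off in rough numbers (assuming Mistral-7B's 4096-dimensional hidden size): for a single 4096×4096 projection, the full delta is about 16.8M parameters, while a rank-64 LoRA stores only 64 × (4096 + 4096) ≈ 0.52M, roughly a 32× reduction; doubling the rank to 128 doubles that footprint.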
