How did you extract LoRAs?

#1
by LoneStriker - opened

Nice job with the LoRAs. I had considered doing the same thing, but couldn't find any tool that would easily extract LoRAs from the models. Most Google searches turned up Stable Diffusion tools for extracting LoRAs.

Thanks! A multi-LoRA setup on a base model is a good approach for a multitude-of-experts-type workflow like HelixNet. It's scalable too, with new approaches such as S-LoRA allowing thousands of LoRAs to be served efficiently off the same base.
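To illustrate the multi-LoRA-on-one-base idea (S-LoRA itself is a research serving system; this sketch uses vLLM's multi-LoRA support instead, and the model names and adapter paths are hypothetical):

```python
# Minimal sketch: serve several LoRA adapters off a single base model with vLLM.
# "base-model" and the adapter paths are placeholders, not the HelixNet artifacts.
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

llm = LLM(model="base-model", enable_lora=True, max_loras=4)
params = SamplingParams(temperature=0.7, max_tokens=256)

# Route the same prompt through two different adapters sharing the base weights.
for name, lora_id, path in [("actor", 1, "/path/to/actor-lora"),
                            ("critic", 2, "/path/to/critic-lora")]:
    outputs = llm.generate(
        ["Explain what a LoRA adapter is."],
        params,
        lora_request=LoRARequest(name, lora_id, path),
    )
    print(name, outputs[0].outputs[0].text)
```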

For extracting the LoRA from its merged model, I used this for the SVD factorization: https://github.com/uukuguy/multi_loras. I ended up generating a range of LoRAs at different ranks (32, 48, 64, and 128) and settled on rank 64 for this project. The higher the rank, the greater the LoRA's fidelity, but at the expense of size.
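For anyone curious what the SVD extraction looks like conceptually, here is a minimal sketch: take the weight delta between the merged and base models and keep its top-r singular directions as the LoRA A/B factors. The model names, module filter, and helper function are illustrative assumptions, not the exact multi_loras implementation.

```python
# Hedged sketch of SVD-based LoRA extraction from a merged model.
import torch
from transformers import AutoModelForCausalLM

def svd_lora_from_delta(w_merged: torch.Tensor, w_base: torch.Tensor, rank: int = 64):
    """Factor (w_merged - w_base) into low-rank LoRA matrices B @ A of the given rank."""
    delta = (w_merged - w_base).float()
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    # Keep the top-`rank` singular directions; split sqrt(S) across both factors.
    sqrt_s = torch.diag(S[:rank].sqrt())
    B = U[:, :rank] @ sqrt_s          # (out_features, rank)
    A = sqrt_s @ Vh[:rank, :]         # (rank, in_features)
    return A, B

# Placeholder model names -- substitute the actual base and merged checkpoints.
base = AutoModelForCausalLM.from_pretrained("base-model", torch_dtype=torch.float32)
merged = AutoModelForCausalLM.from_pretrained("merged-model", torch_dtype=torch.float32)

lora_state = {}
for (name, w_m), (_, w_b) in zip(merged.named_parameters(), base.named_parameters()):
    # Only factor 2-D projection weights (e.g. attention and MLP projections).
    if w_m.ndim == 2 and name.endswith("weight"):
        A, B = svd_lora_from_delta(w_m.data, w_b.data, rank=64)
        lora_state[f"{name}.lora_A"] = A
        lora_state[f"{name}.lora_B"] = B

torch.save(lora_state, "extracted_lora_r64.pt")
```

The rank trade-off mentioned above shows up directly here: a larger `rank` keeps more singular values of the delta, so the reconstruction `B @ A` is closer to the true weight difference, but both factors grow linearly with the rank.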

Thanks for the pointer to multi_loras! I'll give it a go.
