Help with merging LoRA back into Llama

#1
by WesPro

Hi,

thanks for sharing your LoRA models. I can only make really small ones myself on my RTX 3050 Laptop GPU, so I really appreciate it. Are there any good resources for learning how to merge a LoRA back into a Llama 3 based model? I'm kind of stuck after the first few steps. I get as far as loading both the base_model and the lora_model from my HDD, and I even reach the point where I have a 5 GB safetensors file in my cache, but when I try to move on I get several different errors about variables not being assigned (I think I did assign them, though). Do you have a good step-by-step tutorial for this?
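
For reference, the script route described above typically goes through the PEFT library: load the base model, attach the adapter, then fold it in with merge_and_unload. A minimal sketch of that approach, assuming transformers and peft are installed and using placeholder paths rather than the poster's actual ones:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_path = "path/to/llama3-base"    # placeholder: local base model folder
lora_path = "path/to/lora-adapter"   # placeholder: local LoRA adapter folder
out_path = "path/to/merged-model"    # placeholder: output folder

# Load the base model in fp16 to keep memory use manageable.
base_model = AutoModelForCausalLM.from_pretrained(base_path, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(base_path)

# Attach the LoRA weights, then fold them into the base weights.
model = PeftModel.from_pretrained(base_model, lora_path)
merged = model.merge_and_unload()

# Save the standalone merged model together with its tokenizer.
merged.save_pretrained(out_path)
tokenizer.save_pretrained(out_path)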

Resplendent AI org
edited Apr 20

The easiest method is to just drop it into mergekit with a config like this:

models:
  - model: Model+LoRA
    parameters:
      weight: 1.0
merge_method: linear
dtype: float16

The Mergekit GUI is free, and it supports LoRA merging as well as uploading the result directly to your page with an access token: https://huggingface.co/spaces/arcee-ai/mergekit-gui
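
If you'd rather run the merge on your own machine instead of the GUI, the same config can be fed to mergekit locally, either with the mergekit-yaml command line tool or through its Python entry point. A minimal sketch of the latter, assuming mergekit is installed, the config above is saved as lora-merge.yml, and the run_merge interface matches what the mergekit README documents:

import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML config shown above (saved here as lora-merge.yml).
with open("lora-merge.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge and write the merged model to ./merged-model.
run_merge(
    merge_config,
    out_path="./merged-model",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use the GPU for the merge if one is available
        copy_tokenizer=True,             # copy the base model's tokenizer alongside the weights
    ),
)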

Thanks for your help. After 15 tries I finally figured out that I need to leave the "" off when specifying the paths to where the models are stored on my HDD.
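
For anyone hitting the same snag: with local folders, the model line of the config above takes the plain paths joined by a plus sign, without quotation marks, as reported above. A sketch with hypothetical placeholder paths:

models:
  - model: /path/to/llama3-base+/path/to/lora-adapter   # placeholder local paths, no quotes
    parameters:
      weight: 1.0
merge_method: linear
dtype: float16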
