Is there a way to do it locally?

#2
by Zaesar

Hi,

I came across this space, which was suggested for merging my AutoTrain fine-tuned model so that I can afterwards convert it into GGUF format.

I was told that to make it work I first have to merge it with the base model, and that this space should do the trick.

So my question is: how do I do it locally?

Appreciate it!

Hi @Zaesar ,

Clone the project using the following command:

git clone https://huggingface.co/spaces/Weyaxi/merge-lora

After that, change into the project directory and run the app:

cd merge-lora
python3 app.py
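
If the app complains about missing packages, install the space's dependencies first (assuming the repo ships the usual requirements.txt):

pip install -r requirements.txt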

I noticed you created a Mistral fine-tune. Perhaps this space's RAM will be sufficient for that. Have you tried it?
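
By the way, if you'd rather skip the app entirely, the merge itself is just the standard peft flow. A minimal sketch, assuming a Mistral base; the adapter repo name is a placeholder, so swap in your own:

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "your-username/your-mistral-lora")

# Fold the adapter weights into the base weights and save the result.
merged = model.merge_and_unload()
merged.save_pretrained("merged-model")
AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1").save_pretrained("merged-model")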

Feel free to ping me btw :)

Thanks a lot!

Well, I'll have to look at the docs, but I assume it's pretty straightforward. I'm using an NVIDIA 4090, so I guess it won't be a problem.

My idea is to then use mrfakename/convert-to-gguf. Hope it works!

If you have any suggestions, I will be very happy to hear them.

You don't actually need a GPU for this, but a 4090 will work fine :)

If you're merging the 7B model, this space should do the trick without requiring you to clone it.

I don't know anything about converting to GGUF, sorry.

Thanks a lot!

It worked, including the conversion to .gguf.

Now I have to run some tests, but it seems fine.
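
In case anyone wants to do the conversion step locally too: llama.cpp ships a conversion script, and something along these lines should work on the merged output (script name taken from a recent llama.cpp checkout; merged-model is a placeholder for wherever the merged weights were saved):

python convert_hf_to_gguf.py merged-model --outfile merged-model.gguf --outtype f16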

Thanks and happy X-mas!

Oh,

After doing the merge, the model doesn't know any of the information I fine-tuned it on.

It behaves just like the base model; all the fine-tuned information seems to be lost.

Is there any way to tweak it? Maybe I can make it more aware of the fine-tuned data?

Appreciate it!

Hi @Zaesar ,

That problem is related to your fine-tuning process or dataset, I guess, so I can't say much about that part.
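
Still, one quick sanity check you could run before merging (the repo names and prompt here are placeholders): generate once from the plain base model and once with the adapter attached. If the two outputs are identical, the adapter isn't changing the model at all, and the problem is upstream of the merge:

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
inputs = tok("Ask something only your fine-tuning data covers.", return_tensors="pt")

# Output of the plain base model.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
print(tok.decode(base.generate(**inputs, max_new_tokens=40)[0]))

# Output with the LoRA adapter attached (before any merge).
tuned = PeftModel.from_pretrained(base, "your-username/your-mistral-lora")
print(tok.decode(tuned.generate(**inputs, max_new_tokens=40)[0]))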

Ok, thank you
