Update.

#2
by KatyTheCutie - opened

Hey, sorry for not responding; I haven't been active much. I've failed to train models smaller than 7B, since for some reason most fine-tuning frameworks don't seem to support them well. I think I can try fine-tuning one of the newest Qwen2 series models with all the data. Which parameter size would you like me to use?
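
For reference, a minimal sketch of what a LoRA fine-tune of a Qwen2 7B checkpoint could look like with Hugging Face transformers + peft, assuming a single-GPU setup. The dataset file name, text field, and hyperparameters below are placeholders, not details from this thread:

```python
# Minimal LoRA fine-tuning sketch for Qwen2-7B-Instruct (placeholder config).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL_ID = "Qwen/Qwen2-7B-Instruct"  # one of the sizes discussed here

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Attach small LoRA adapters instead of training all 7B parameters,
# which is what makes a model this size trainable on modest hardware.
model = get_peft_model(
    model,
    LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
               task_type="CAUSAL_LM"),
)

# Hypothetical: assumes the training data was exported to a JSONL file
# with a "text" field; adjust to however the archive is actually structured.
dataset = load_dataset("json", data_files="train.jsonl")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

dataset = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="qwen2-7b-lora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=1,
        learning_rate=2e-4,
        bf16=True,
        logging_steps=10,
    ),
    train_dataset=dataset,
    # mlm=False makes the collator build causal-LM labels from the inputs.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```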

Hi Katy, thanks for the reply. Which dataset did you select for fine-tuning, one of those I shared before in the 7z archives (dataset_yiffycomebrain_10k.7z)?
I now have a new RTX 3050 6GB; I tested koboldcpp with mistral-7b and llama-3 and it works well.
A size of 7B or 8B will be fine.
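
For anyone testing the result the same way, here is a minimal sketch of querying a locally running koboldcpp instance from Python via its KoboldAI-style `/api/v1/generate` endpoint (default port 5001); the prompt and sampler settings are placeholders:

```python
# Query a local koboldcpp server; assumes it was started with a GGUF model
# loaded and is listening on the default port 5001.
import requests

payload = {
    "prompt": "Write a short greeting.",
    "max_length": 80,      # number of tokens to generate
    "temperature": 0.7,
}
resp = requests.post("http://localhost:5001/api/v1/generate",
                     json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["results"][0]["text"])
```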
