Is it the older attempt that stopped at 1200 steps or the newer one that stopped at 1000 steps?

#3
by adamo1139 - opened

Aeala released two LoRAs for this model: one 3.2 GB, trained for 1200 steps, and then one trained on the updated QLoRA repo for 1000 steps that weighs 1.6 GB. Are all quantizations in this repo, even the new methods, merges of the older 1200-step 3.2 GB LoRA with the base model?

All the files will be whatever was in Aeala's repo 9 days ago, which I assume is the 1200 step one?

When I did the new quants I didn't re-merge the LoRA. I re-quantised from the fp16 merge I did when I first made the repo 9 days ago.
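For context on why re-merging isn't needed: merging a LoRA folds the low-rank adapter update into the base weights once, so every later quantization can start from that same merged fp16 checkpoint. A minimal numpy sketch of the arithmetic (the shapes, rank, and alpha here are illustrative assumptions, not Aeala's actual training config):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 64, 8, 16  # hypothetical hidden size, LoRA rank, LoRA alpha

W = rng.standard_normal((d, d)).astype(np.float32)        # base model weight
A = rng.standard_normal((r, d)).astype(np.float32)        # LoRA down-projection
B = (rng.standard_normal((d, r)) * 0.01).astype(np.float32)  # LoRA up-projection

# Merging folds the low-rank update into the base weight once:
W_merged = W + (alpha / r) * (B @ A)

# After the merge the adapter is no longer needed at inference time;
# the merged checkpoint can be quantized repeatedly without re-merging.
x = rng.standard_normal((d,)).astype(np.float32)
adapter_out = W @ x + (alpha / r) * (B @ (A @ x))  # base + adapter path
merged_out = W_merged @ x                           # single merged matmul
print(np.allclose(adapter_out, merged_out, atol=1e-4))
```

The two outputs match, which is why quantizing from the saved fp16 merge is equivalent to merging again first.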

If there's a new and better version out then I can add this to my list of models, and re-do all the quants in the next few days.

I did release a new one since then that was redone with a newer version of the QLoRA repo, so if there were bugs caused by the last one, they're likely fixed in this one. Up to you whether you want to redo them, of course. ^~^ Thanks for your work!