Performance worse than plain alpaca lora 30b?

#2
by Davidliudev - opened

https://huggingface.co/elinas/alpaca-30b-lora-int4

Evals - Groupsize 128 + True Sequential

alpaca-30b-4bit-128g.safetensors [4805cc2]
c4-new - 6.398105144500732
ptb-new - 8.449508666992188
wikitext2 - 4.402845859527588

Ummm... so this is worse than plain alpaca lora 30b?

Those numerical values are synthetic benchmarks. There is a small increase in perplexity (OA's native fine-tune causes this), but you get much better prose, and it should give better chat performance than plain alpaca.
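For reference, perplexity figures like the ones above are just the exponential of the mean per-token negative log-likelihood over the eval set (wikitext2, ptb-new, c4-new). A minimal sketch in plain Python, using hypothetical log-probability values rather than actual model logits:

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp of the mean negative log-likelihood per token."""
    nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(nll)

# Hypothetical per-token natural-log probabilities, for illustration only.
log_probs = [-1.2, -0.7, -2.1, -1.5]
print(perplexity(log_probs))
```

Lower is better, so a small bump (e.g. wikitext2 going from ~4.2 to ~4.4) means the fine-tune made the model slightly worse at predicting that corpus, even if its chat output subjectively improves.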

Got you. Will give it a try. Thanks for your work!
