How does this model compare to anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g?

by amartinr

I think both models come from the same base model by chavinlo, but I wanted to know what their main differences are.
anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g

Great work, BTW!

This one is for CPU-based solutions such as https://koboldai.org/cpp, while the other is a GPTQ model for GPU inference.
Performance is similar.

This one was converted to GGML directly from the original GPT4 x Alpaca 13B model, then quantized. The one by anon was quantized with GPTQ first, then converted to GGML afterwards. Mine should most likely give slightly better results, since a single quantization step loses less data than quantizing and then re-quantizing through a second format. But I'm not entirely sure, especially given that mine is a bit smaller.
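To make the data-loss point concrete, here is a toy sketch (not GGML's or GPTQ's actual schemes, just simple uniform 4-bit grids) showing that quantizing weights once tends to introduce less error than quantizing them and then re-quantizing the result onto a different grid, which is roughly what a GPTQ-to-GGML conversion does:

```python
import numpy as np

def quantize_q4(x):
    """Toy symmetric 4-bit quantization: 15 levels on a uniform grid."""
    scale = np.abs(x).max() / 7
    q = np.clip(np.round(x / scale), -8, 7)
    return q, scale

def dequantize(q, scale):
    return q * scale

rng = np.random.default_rng(0)
# Stand-in for one tensor of fp16 model weights.
w = rng.normal(size=10_000).astype(np.float32)

# Path A: quantize the original weights once (direct GGML-style path).
qa, sa = quantize_q4(w)
err_direct = np.abs(dequantize(qa, sa) - w).mean()

# Path B: quantize, dequantize, then re-quantize onto a *different*
# (asymmetric affine) 4-bit grid, mimicking a two-step conversion.
w_round1 = dequantize(*quantize_q4(w))
offset = w_round1.min()
scale_b = (w_round1.max() - offset) / 15
qb = np.clip(np.round((w_round1 - offset) / scale_b), 0, 15)
w_round2 = qb * scale_b + offset
err_double = np.abs(w_round2 - w).mean()

print(f"one-step error:  {err_direct:.4f}")
print(f"two-step error:  {err_double:.4f}")
```

Because the second grid doesn't line up with the first, the rounding errors compound, so the two-step path ends up further from the original weights.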

I like the one here better (my opinion).
