Why are these models fp32?

by supercharge19

It is good to see that interest in 1-bit LLMs is growing, but why are your models fp32? Did Hugging Face convert them to make them compatible with the transformers library? How do we use the original 1-bit models? What is the size and speed difference compared to fp32?
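
For reference, here is a minimal sketch of how one could check what the checkpoint actually loads as. The repo id below is an assumption, so substitute the actual model id:

```python
# Minimal sketch: inspect the dtype and parameter count of the published checkpoint.
# The repo id is an assumption -- replace it with the actual model id.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("1bitLLM/bitnet_b1_58-3B")
print(model.dtype)                                  # e.g. torch.float32
print(sum(p.numel() for p in model.parameters()))   # total parameter count
```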

Yeah, it's weird. They could have used FP8 instead.

Maybe it was miscategorized? It seems smaller than a similar fp16 model.

Gemma 2.8B with fp16 is almost 5 GB, while this one is 3B and 13.3 GB, so I don't think it is fp16, let alone 1-bit.
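
As a rough sanity check (a back-of-the-envelope sketch, assuming roughly 3.3B parameters and ignoring non-weight tensors), 13.3 GB is about what fp32 storage would give:

```python
# Rough storage estimate per precision for ~3.3B parameters (assumed count).
params = 3.3e9
for name, bits in [("fp32", 32), ("fp16", 16), ("int8", 8), ("ternary (~1.58-bit)", 1.58)]:
    gb = params * bits / 8 / 1e9
    print(f"{name:>20}: ~{gb:.1f} GB")
# fp32 -> ~13.2 GB, which matches the 13.3 GB checkpoint;
# truly packed ternary weights would be well under 1 GB.
```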

Ah, somehow I was misremembering the file size, sorry. Not sure how that happened, tbh.

Yes, it seems the weights are not ternary {-1, 0, 1}; instead they are float32. Even if the weight datatype is float32, they should at least have made the weights ternary {-1, 0, 1} so that we can see how the model performs when it really uses -1, 0, and 1 as weights.
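
A minimal sketch of how one could verify whether the stored weights are actually ternary; the repo id and layer path are assumptions, so substitute whatever the checkpoint actually uses:

```python
# Minimal sketch: check whether a stored weight tensor contains only {-1, 0, 1}.
# Repo id and layer path are assumptions -- adjust to the actual checkpoint.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("1bitLLM/bitnet_b1_58-3B")
w = model.model.layers[0].self_attn.q_proj.weight
vals = torch.unique(w)
print(vals)  # a truly ternary layer would show only -1, 0, 1 (possibly times a scale factor)
print(torch.all(torch.isin(vals, torch.tensor([-1.0, 0.0, 1.0]))))
```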
