Would there be a chance for Jamba to be trained with 1.58-bit weights?

#22
opened by shing3232

https://github.com/ggerganov/llama.cpp/issues/5761#issuecomment-2027950269
It would probably be easier if you start from scratch and train MoE models?
There are a few groups of people training 1.58-bit models.

No one has tested SSMs with the 1.58-bit strategy. Research would likely need to be done on Mamba before it is done on Jamba, since BitNet only defines BitLinear and not a BitConv1D (yet), and we don't know how something like that would perform. Then there is the fact that Mamba already provides efficient inference, so it is more likely that other methods (speculative decoding, for example) would be used to speed up inference before this kind of quantization-aware training is attempted. All that being said, it could still prove to be an interesting line of research, but it is not just low-hanging fruit.
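For context, here is a rough sketch of what such quantization-aware layers could look like. `BitLinear` follows the absmean ternary quantization described in the BitNet b1.58 paper; `BitConv1d` is a hypothetical counterpart for Mamba's depthwise conv1d that does not exist in BitNet and is exactly the part that would need research. Names and details are illustrative, not an actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def ternary_quantize(w: torch.Tensor) -> torch.Tensor:
    # Absmean quantization to {-1, 0, +1} (BitNet b1.58 style).
    scale = w.abs().mean().clamp(min=1e-5)
    q = (w / scale).round().clamp(-1, 1) * scale
    # Straight-through estimator: forward uses the quantized weight,
    # backward passes gradients through to the full-precision weight.
    return w + (q - w).detach()

class BitLinear(nn.Linear):
    """Linear layer with 1.58-bit (ternary) weights via quantization-aware training."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.linear(x, ternary_quantize(self.weight), self.bias)

class BitConv1d(nn.Conv1d):
    """Hypothetical ternary version of Mamba's conv1d; not part of BitNet (yet)."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self._conv_forward(x, ternary_quantize(self.weight), self.bias)
```

Whether a ternary conv1d (and the SSM parameters around it) can be trained this way without hurting the selective-scan dynamics is exactly the open question.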
