Convert to huggingface safetensors
For that I think this model would need to be ported to the transformers library. It might take a while.
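Once a port exists, the safetensors export itself should be trivial. A minimal sketch, assuming the weights already load through `AutoModelForCausalLM` (the repo id below is just a placeholder):

```python
# Minimal sketch: re-save a transformers-loadable checkpoint as safetensors.
# Assumes a transformers port of Grok-1 already exists; "xai-org/grok-1" is a
# placeholder repo id here, not a confirmed transformers-compatible checkpoint.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "xai-org/grok-1",        # placeholder; requires a working transformers port
    torch_dtype="auto",
    trust_remote_code=True,
)

# safe_serialization=True writes sharded .safetensors files instead of pytorch_model.bin
model.save_pretrained("grok-1-hf", safe_serialization=True)
```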
Yes please
Hope this will allow it to be used with chat!
It will not.
Converting it to safetensors doesn't change the fact that I'm GPU poor.
I clearly know nothing about the backend of HF. How can I help get it going?
Hey all, this interesting discussion is not related to my request, which is directed at @xai-org: please convert the model to Hugging Face format and safetensors.
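To be concrete about the safetensors part of the request: once the raw weights can be loaded as a flat dict of PyTorch tensors, the write itself is a one-liner. A minimal sketch (the loading step depends on the release format and is omitted):

```python
# Minimal sketch: write a flat {name: tensor} dict of PyTorch tensors as safetensors.
# Assumes the original Grok-1 weights have already been loaded into `state_dict`;
# how to load them depends on the release format and is not shown here.
import torch
from safetensors.torch import save_file

state_dict: dict[str, torch.Tensor] = {}  # placeholder for the loaded weights

# safetensors requires contiguous tensors
state_dict = {name: t.contiguous() for name, t in state_dict.items()}
save_file(state_dict, "grok-1.safetensors")
```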
We first need Grok1CausalLM in transformers. Hope that happens soon, and then a llama.cpp Q2 quant.
Update: we might have one before the other
Untrue
I think I could run this model on my rig (7x24 GB) if it were 4-bit exllama2 quantized. I'll do the quantizing on my own if necessary; looking forward to the safetensors format!
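Rough back-of-the-envelope check, assuming the commonly cited ~314B parameter count and ignoring KV cache and runtime overhead:

```python
# Back-of-the-envelope VRAM check for a 4-bit quant of Grok-1.
# Assumption: ~314B parameters (commonly cited figure); ignores KV cache,
# activations, and per-GPU fragmentation overhead.
params = 314e9
weight_gb = params * 4 / 8 / 1e9     # 4 bits per weight -> ~157 GB of weights
vram_gb = 7 * 24                     # 7 x 24 GB GPUs -> 168 GB total
print(f"weights ~{weight_gb:.0f} GB vs VRAM {vram_gb} GB")
```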