Can you provide the GGUF format for llama.cpp?

#3
by zhibinlu - opened

Can someone make GGUF and GPTQ formats for llama.cpp? @TheBloke

I already made the GGUFs, and the GPTQs are processing.

Actually, the GPTQs just completed a minute ago. They're all there now.

Thanks a lot! 🫡
