What is the current method to convert these models to work with Llama.cpp?

#1
by Derg - opened

I have tried converting the model using some older methods floating around, and they don't seem to produce the correct ggml format. What is the current methodology or script for this? Specifically regarding converting the .bin to .pth. EDIT: Never mind, I have it working in llama.cpp now. Great job!


how??

Could a kind soul provide a magnet of the ggml quantized binaries for llama.cpp?

Yeah, how?

Just use the convert.py script in the root of llama.cpp.
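
For reference, a minimal sketch of the usual workflow, assuming the original weights live in a hypothetical `./models/7B/` directory; exact flags, output filenames, and quantization types can differ between llama.cpp versions, so check the README of your checkout:

```sh
# Convert the original PyTorch/HF weights into a ggml file (f16 precision).
# The output path shown here is the default naming used by convert.py at the time.
python convert.py ./models/7B/ --outtype f16

# Optionally quantize the f16 ggml file down to 4-bit to cut memory use.
./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin q4_0
```

After that, the quantized file can be loaded with the main llama.cpp binary, e.g. `./main -m ./models/7B/ggml-model-q4_0.bin -p "Hello"`.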
