Great work! Can the FP16 weights be released?

#6 opened by digitous

If you don't mind: the 13B ones are already released, and for parity's sake and for model merging it would be a great boon.

Otherwise I'm going to have to figure out how to adjust this script to reverse a q_5 quantization:
https://github.com/ductai199x/llama.cpp/blob/84ba1fd25b1a0f12d11a655fecf90c7cbb0babb0/convert_ggml_to_pth.py
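To illustrate what "reversing" a quantized checkpoint involves, here is a minimal sketch of dequantizing a single legacy-GGML q5_0 block back to float. It assumes the legacy q5_0 layout (32 weights per block: an fp16 scale `d`, 4 bytes of packed high bits `qh`, and 16 bytes of packed low nibbles `qs`); the function name and synthetic test data are my own, so verify the layout against the ggml source before relying on this.

```python
import struct
import numpy as np

BLOCK = 32  # weights per q5_0 block (QK5_0 in ggml)

def dequantize_q5_0_block(raw: bytes) -> np.ndarray:
    """Decode one 22-byte q5_0 block: 2 bytes fp16 scale + 4 bytes qh + 16 bytes qs."""
    d = float(np.frombuffer(raw[0:2], dtype=np.float16)[0])
    qh = struct.unpack("<I", raw[2:6])[0]          # 32 high bits, one per weight
    qs = np.frombuffer(raw[6:22], dtype=np.uint8)  # 16 bytes of packed 4-bit quants
    out = np.empty(BLOCK, dtype=np.float32)
    for i in range(16):
        lo = qs[i] & 0x0F            # low nibble -> weight i
        hi = (qs[i] >> 4) & 0x0F     # high nibble -> weight i + 16
        lo |= ((qh >> i) & 1) << 4           # 5th bit for weight i
        hi |= ((qh >> (i + 16)) & 1) << 4    # 5th bit for weight i + 16
        out[i] = (int(lo) - 16) * d          # q5_0 is symmetric: quant 16 -> 0.0
        out[i + 16] = (int(hi) - 16) * d
    return out

# Synthetic block: scale 0.5, every quant equal to 16 (nibble 0, high bit 1),
# so every dequantized weight should come out as exactly 0.0.
raw = np.float16(0.5).tobytes() + struct.pack("<I", 0xFFFFFFFF) + bytes(16)
weights = dequantize_q5_0_block(raw)
```

The catch for recovering FP16 weights this way is that quantization is lossy: this only reconstructs the rounded values, not the original checkpoint, which is why a proper FP16 release is preferable for merging.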
