Bug with GGUF and Llama3

#1
by synbiotik - opened

Thank you for pointing this out. I've converted from FP16 to FP32 before quantization. I'll keep an eye on this and re-upload fixed models once the issue is resolved, though to be honest I haven't spotted any problems so far.
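For reference, a typical upcast-then-quantize flow with llama.cpp looks roughly like this (this is a sketch, not the exact commands used here; the model path and output filenames are placeholders, and script/binary names assume a recent llama.cpp checkout):

```shell
# Convert the HF checkpoint directly to an FP32 GGUF,
# skipping any lossy FP16 intermediate
python convert_hf_to_gguf.py ./Meta-Llama-3-8B-Instruct \
  --outtype f32 --outfile llama3-8b-f32.gguf

# Quantize from the FP32 file (Q4_K_M chosen as an example type)
./llama-quantize llama3-8b-f32.gguf llama3-8b-Q4_K_M.gguf Q4_K_M
```

Quantizing from FP32 rather than FP16 avoids compounding rounding error from the half-precision intermediate, which is the usual motivation for this extra step.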