Updating the GGUF files

#4
by Armada22 - opened

Are you planning on using the newest version of llama.cpp that implements the fixes from https://github.com/ggerganov/llama.cpp/pull/6920 to update the files?

Owner

Yes, but later!
No worries!

Owner

Alright, I tried to do it; here's my issue:

  • If I use convert.py => it doesn't work
  • If I use convert-hf-to-gguf.py => it doesn't work
  • If I use convert.py with --vocab-type bpe => it works, but the output files are the same
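For reference, a rough sketch of those three attempts as they might be run from a llama.cpp checkout; the model directory path below is hypothetical:

```python
# Rough sketch of the three conversion attempts described above, run from a
# llama.cpp checkout. MODEL_DIR is a hypothetical local copy of the HF model.
import subprocess

MODEL_DIR = "./models/original-hf-model"  # hypothetical path

# Attempt 1: legacy convert.py (reported as not working here)
subprocess.run(["python", "convert.py", MODEL_DIR], check=False)

# Attempt 2: convert-hf-to-gguf.py (also reported as not working here)
subprocess.run(["python", "convert-hf-to-gguf.py", MODEL_DIR], check=False)

# Attempt 3: convert.py with an explicit BPE vocab type; this runs, but the
# resulting files reportedly come out identical to the previous ones
subprocess.run(["python", "convert.py", MODEL_DIR, "--vocab-type", "bpe"], check=False)
```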

Hum...
I didn't use the tokenizer.model from Llama3 (only the tokenizer.json), and I didn't get any issues with the tokens myself, even when people told us they did... So I dunno, please report if you run into issues.
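If you want to check whether a given GGUF file was regenerated with the updated pre-tokenizer handling from that PR, one quick way is to look for the `tokenizer.ggml.pre` metadata key that the newer conversion scripts write. A minimal sketch, assuming the `gguf` Python package is installed and using a hypothetical file path:

```python
# Minimal sketch: check whether a GGUF file contains the pre-tokenizer
# metadata written by the updated conversion scripts. Assumes the `gguf`
# Python package is installed; the file path below is hypothetical.
from gguf import GGUFReader

GGUF_PATH = "./model.Q4_K_M.gguf"  # hypothetical path to a quantized file

reader = GGUFReader(GGUF_PATH)
if "tokenizer.ggml.pre" in reader.fields:
    print("tokenizer.ggml.pre present: file was converted with the updated scripts.")
else:
    print("tokenizer.ggml.pre missing: file likely predates the pre-tokenizer fix.")
```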

Owner

I'm currently fixing all the models; this one is next in a few minutes. Have fun!

Thank you so much!

Armada22 changed discussion status to closed
