Missing tokenizer.model file

#6
by whatever1983 - opened

@TheBloke : how did you use llama.cpp's ./quantize to get those GGUFs when the tokenizer.model file wasn't even uploaded by the model creator? If you managed to create a tokenizer.model file yourself, would you mind sharing it on this repo? Also for the 33B one.

Thanks

Yeah, I couldn't make it using the standard llama.cpp convert.py. Fortunately there was a PR to add support for the HF tokenizer format. It had a few problems and bugs at first, but after a few fixes I was able to make working GGUFs.

That PR hasn't been merged yet as it's still being reviewed. But if you want to make your own GGUFs of this model, or any other model without a tokenizer.model, you can use the convert.py from https://github.com/ggerganov/llama.cpp/pull/3633
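For anyone following along, the workflow described above would look roughly like the sketch below. This is an illustration, not the exact commands from the post: the PR branch name, the flag telling convert.py to use the HF tokenizer (shown here as `--vocabtype hf`), and the model directory are assumptions that may differ depending on the PR revision you check out.

```shell
# Sketch: build GGUFs for a model that ships only HF tokenizer files
# (tokenizer.json / tokenizer_config.json) and no tokenizer.model.

# 1. Fetch the convert.py from the PR (branch name is an assumption;
#    check the PR page for the actual branch).
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
git fetch origin pull/3633/head:pr-3633
git checkout pr-3633

# 2. Convert the HF checkpoint to an unquantized GGUF. The flag for
#    selecting the HF tokenizer path (--vocabtype hf) is assumed here;
#    the PR may name it differently.
python convert.py /path/to/hf-model-dir \
    --outtype f16 \
    --outfile model-f16.gguf \
    --vocabtype hf

# 3. Quantize with the standard llama.cpp quantize tool.
make quantize
./quantize model-f16.gguf model-Q4_K_M.gguf Q4_K_M
```

The key point is step 2: the stock convert.py refuses to run without a SentencePiece tokenizer.model, while the PR's version can read the vocabulary from the Hugging Face tokenizer files instead, so steps 1 and 3 are unchanged from the normal GGUF workflow.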

Hey @TheBloke, please provide a quantization of this model: https://huggingface.co/ise-uiuc/Magicoder-S-DS-6.7B. I followed the same strategy (the PR above), but I wasn't successful.

@TheBloke hello, according to your suggestion, it looks like the convert.py from https://github.com/ggerganov/llama.cpp/pull/3633 has been merged, so I tried to quantize the deepseek model myself, but it failed like this. I have found many similar issues on GitHub but could not fix it. I'd appreciate it if you had some advice.
