Error Quantizing to GGUF
#3, opened by cfahlgren1 (HF staff)
I have had issues converting finetunes of both deepseek-coder-6.7b-instruct
and deepseek-coder-7b-instruct-v1.5
to GGUF, and I noticed others have run into the same problem.
Is there anything special that needs to be done?
Traceback (most recent call last):
  File "/content/llama.cpp/convert.py", line 1474, in <module>
    main()
  File "/content/llama.cpp/convert.py", line 1442, in main
    vocab, special_vocab = vocab_factory.load_vocab(args.vocab_type, model_parent_path)
  File "/content/llama.cpp/convert.py", line 1328, in load_vocab
    vocab = SentencePieceVocab(
  File "/content/llama.cpp/convert.py", line 394, in __init__
    self.sentencepiece_tokenizer = SentencePieceProcessor(str(fname_tokenizer))
  File "/usr/local/lib/python3.10/dist-packages/sentencepiece/__init__.py", line 447, in Init
    self.Load(model_file=model_file, model_proto=model_proto)
  File "/usr/local/lib/python3.10/dist-packages/sentencepiece/__init__.py", line 905, in Load
    return self.LoadFromFile(model_file)
  File "/usr/local/lib/python3.10/dist-packages/sentencepiece/__init__.py", line 310, in LoadFromFile
    return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
RuntimeError: Internal: src/sentencepiece_processor.cc(1101) [model_proto->ParseFromArray(serialized.data(), serialized.size())]
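The `ParseFromArray` failure means SentencePiece was handed a file that is not a SentencePiece protobuf; the deepseek-coder repos ship a Hugging Face `tokenizer.json` rather than a SentencePiece `tokenizer.model`, so the default vocab path in `convert.py` picks the wrong loader. Since the traceback shows the script already accepts `args.vocab_type`, a common workaround is to pass a different `--vocab-type` explicitly. The helper below is a hypothetical sketch (not llama.cpp's actual selection logic) of how you might pick a vocab type from the files the model directory actually contains; the type names `spm`, `hfft`, and `bpe` are assumed from convert.py's options of that era:

```python
# Hypothetical helper: guess a convert.py --vocab-type from the tokenizer
# files present in the model directory. This is an illustrative sketch,
# not the official llama.cpp detection logic.
from pathlib import Path

def guess_vocab_type(model_dir: str) -> str:
    d = Path(model_dir)
    if (d / "tokenizer.model").is_file():
        return "spm"   # SentencePiece protobuf -> SentencePieceVocab works
    if (d / "tokenizer.json").is_file():
        return "hfft"  # Hugging Face fast tokenizer (deepseek-coder's case)
    if (d / "vocab.json").is_file():
        return "bpe"   # GPT-2 style BPE vocab files
    raise FileNotFoundError(f"no recognized tokenizer files in {model_dir}")
```

With a directory like the deepseek-coder checkpoints, this would suggest running something along the lines of `python convert.py <model_dir> --vocab-type hfft` instead of relying on the default, which tries to parse `tokenizer.json` as a SentencePiece model and raises exactly the error above.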
cfahlgren1 changed discussion status to closed.