Was unable to convert this in llama.cpp

#1
by vbuhoijymzoi - opened
Loading model file /content/models/Qwen1.5-14B-Chat/model-00001-of-00008.safetensors
Loading model file /content/models/Qwen1.5-14B-Chat/model-00001-of-00008.safetensors
Loading model file /content/models/Qwen1.5-14B-Chat/model-00002-of-00008.safetensors
Loading model file /content/models/Qwen1.5-14B-Chat/model-00003-of-00008.safetensors
Loading model file /content/models/Qwen1.5-14B-Chat/model-00004-of-00008.safetensors
Loading model file /content/models/Qwen1.5-14B-Chat/model-00005-of-00008.safetensors
Loading model file /content/models/Qwen1.5-14B-Chat/model-00006-of-00008.safetensors
Loading model file /content/models/Qwen1.5-14B-Chat/model-00007-of-00008.safetensors
Loading model file /content/models/Qwen1.5-14B-Chat/model-00008-of-00008.safetensors
params = Params(n_vocab=152064, n_embd=5120, n_layer=40, n_ctx=32768, n_ff=13696, n_head=40, n_head_kv=40, n_experts=None, n_experts_used=None, f_norm_eps=1e-06, rope_scaling_type=None, f_rope_freq_base=1000000.0, f_rope_scale=None, n_orig_ctx=None, rope_finetuned=None, ftype=None, path_model=PosixPath('/content/models/Qwen1.5-14B-Chat'))
Found vocab files: {'tokenizer.model': None, 'vocab.json': PosixPath('/content/models/Qwen1.5-14B-Chat/vocab.json'), 'tokenizer.json': PosixPath('/content/models/Qwen1.5-14B-Chat/tokenizer.json')}
Loading vocab file '/content/models/Qwen1.5-14B-Chat/vocab.json', type 'spm'
Traceback (most recent call last):
  File "/content/llama.cpp/convert.py", line 1478, in <module>
    main()
  File "/content/llama.cpp/convert.py", line 1446, in main
    vocab, special_vocab = vocab_factory.load_vocab(args.vocab_type, model_parent_path)
  File "/content/llama.cpp/convert.py", line 1332, in load_vocab
    vocab = SentencePieceVocab(
  File "/content/llama.cpp/convert.py", line 394, in __init__
    self.sentencepiece_tokenizer = SentencePieceProcessor(str(fname_tokenizer))
  File "/usr/local/lib/python3.10/dist-packages/sentencepiece/__init__.py", line 447, in Init
    self.Load(model_file=model_file, model_proto=model_proto)
  File "/usr/local/lib/python3.10/dist-packages/sentencepiece/__init__.py", line 905, in Load
    return self.LoadFromFile(model_file)
  File "/usr/local/lib/python3.10/dist-packages/sentencepiece/__init__.py", line 310, in LoadFromFile
    return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
RuntimeError: Internal: src/sentencepiece_processor.cc(1101) [model_proto->ParseFromArray(serialized.data(), serialized.size())] 

sentencepiece version: 0.1.99
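The traceback suggests convert.py fell back to loading vocab.json as a SentencePiece model, but Qwen1.5 ships a GPT-2-style BPE tokenizer (tokenizer.json / vocab.json), so the protobuf parse fails. If the convert.py revision in use supports it, forcing the vocab type may avoid the crash — the `--vocab-type` flag is an assumption about that script version, not something confirmed by this log:

```shell
# Force BPE vocab handling instead of the SentencePiece fallback.
# (--vocab-type is assumed to exist in this convert.py revision.)
python3 convert.py /content/models/Qwen1.5-14B-Chat \
  --vocab-type bpe \
  --outfile /content/models/Qwen1.5-14B-Chat/ggml-model-f16.gguf \
  --outtype f16
```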


python3 convert-hf-to-gguf.py models/qwen-xyz --outfile models/qwen-xyz/ggml-model-f16.gguf --outtype f16

This didn't work when I tried it at the time.
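For what it's worth, the underlying mismatch is easy to check by hand: a SentencePiece tokenizer.model is a binary protobuf, while vocab.json is plain JSON, which SentencePieceProcessor can never parse. A minimal sketch of that check, using a throwaway file name that is not part of the model:

```python
import json
from pathlib import Path

def looks_like_json_vocab(path: Path) -> bool:
    """Return True if the file parses as JSON (a BPE-style vocab).

    A SentencePiece protobuf model is binary and will fail either the
    UTF-8 decode or the JSON parse, so it returns False here.
    """
    try:
        json.loads(path.read_text(encoding="utf-8"))
        return True
    except (json.JSONDecodeError, UnicodeDecodeError):
        return False

# Demo with a throwaway file (hypothetical name, not from the model repo):
p = Path("vocab_demo.json")
p.write_text('{"hello": 0, "world": 1}', encoding="utf-8")
print(looks_like_json_vocab(p))  # True -> treat as BPE, not spm
```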
