Error converting to fp16: FileNotFoundError: Could not find a tokenizer matching any of ['spm', 'hfft']

#36
by froilo - opened

https://huggingface.co/m-a-p/OpenCodeInterpreter-DS-33B

Error converting to fp16:

```
Traceback (most recent call last):
  File "/home/user/app/llama.cpp/convert.py", line 1548, in <module>
    main()
  File "/home/user/app/llama.cpp/convert.py", line 1515, in main
    vocab, special_vocab = vocab_factory.load_vocab(vocab_types, model_parent_path)
  File "/home/user/app/llama.cpp/convert.py", line 1417, in load_vocab
    vocab = self._create_vocab_by_path(vocab_types)
  File "/home/user/app/llama.cpp/convert.py", line 1407, in _create_vocab_by_path
    raise FileNotFoundError(f"Could not find a tokenizer matching any of {vocab_types}")
FileNotFoundError: Could not find a tokenizer matching any of ['spm', 'hfft']
```
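
For anyone hitting the same failure: a quick way to see why both loaders fail is to check which tokenizer files the repo actually ships. A minimal diagnostic sketch, assuming the `huggingface_hub` package is installed:

```python
# Diagnostic sketch: list the repo's files to see why convert.py's
# 'spm' and 'hfft' tokenizer loaders both come up empty.
from huggingface_hub import list_repo_files

repo_id = "m-a-p/OpenCodeInterpreter-DS-33B"
files = list_repo_files(repo_id)

# The 'spm' path needs a SentencePiece tokenizer.model file; the 'hfft'
# path needs a tokenizer convert.py can load (detection varies by version).
print("tokenizer.model present:", "tokenizer.model" in files)
print("tokenizer.json present:", "tokenizer.json" in files)
```

If the repo ships tokenizer.json but no tokenizer.model, the 'spm' loader has nothing to read, which matches the FileNotFoundError above.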

please use code fences when pasting code into a Discussion. Thanks!

against my religion
not fencing anyone

reach-vb (ggml.ai org)

Hi @froilo - currently the conversion scripts expect a tokenizer.model (SentencePiece) file in the repo; they don't fully interoperate with the tokenizer.json format used by fast tokenizers in transformers.

Do you mind opening this as an issue in llama.cpp?
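
If convert.py in this llama.cpp checkout accepts a `--vocab-type` flag (the `['spm', 'hfft']` list in the traceback suggests this version does), forcing the BPE path is worth a try for repos that ship only tokenizer.json. A hypothetical, untested sketch, assuming a local llama.cpp checkout and a locally downloaded copy of the model:

```python
# Untested workaround sketch: force convert.py's BPE tokenizer path for a
# repo that ships tokenizer.json but no tokenizer.model. Paths below are
# hypothetical; adjust to your checkout and download location.
import subprocess

subprocess.run(
    [
        "python", "llama.cpp/convert.py",
        "./OpenCodeInterpreter-DS-33B",  # hypothetical local model directory
        "--outtype", "f16",
        "--vocab-type", "bpe",  # try the BPE tokenizer path instead of spm/hfft
    ],
    check=True,
)
```

Whether this produces a correct vocabulary for this particular model is untested; opening an issue in llama.cpp, as suggested above, is still the right move.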

reach-vb (ggml.ai org)

Stale issue - closing - feel free to re-open if the issue persists.

reach-vb changed discussion status to closed
