Add llama.cpp support please

#1
by Laoxu - opened

I get an error when running the conversion script in bash:
```
INFO:hf-to-gguf:Loading model: XVERSE-65B-2
INFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only
INFO:hf-to-gguf:Set model parameters
INFO:hf-to-gguf:Set model tokenizer
Traceback (most recent call last):
  File "convert-hf-to-gguf.py", line 2546, in <module>
    main()
  File "convert-hf-to-gguf.py", line 2531, in main
    model_instance.set_vocab()
  File "convert-hf-to-gguf.py", line 911, in set_vocab
    tokenizer = AutoTokenizer.from_pretrained(dir_model)
  File "/root/miniconda3/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 880, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 2110, in from_pretrained
    return cls._from_pretrained(
  File "/root/miniconda3/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 2336, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 114, in __init__
    fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file)
Exception: data did not match any variant of untagged enum PyPreTokenizerTypeWrapper at line 78 column 3
```
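The "data did not match any variant of untagged enum" message comes from the Rust `tokenizers` deserializer while parsing the model's `tokenizer.json`, which often points to an installed `tokenizers`/`transformers` version that is too old for the file format (an assumption based on the error text, not a confirmed diagnosis for this model). A first diagnostic step is to check which versions are installed:

```python
# Print the installed versions of the two libraries involved in the
# failing call. An outdated `tokenizers` package is a common cause of
# this deserialization error (assumption, not confirmed here).
from importlib.metadata import version, PackageNotFoundError

for pkg in ("transformers", "tokenizers"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")
```

If the versions are old, upgrading (`pip install -U transformers tokenizers`) and retrying the conversion may resolve the parse failure; if not, the model's `tokenizer.json` may use a pre-tokenizer type that llama.cpp's converter does not yet handle.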
