tokenizer.model?

#1
by jlinux - opened

I see the tokenizer files are not the same as what llama.cpp can usually convert. Are there any plans to support llama.cpp with a GGUF version?

Pipable Inc org

It's a standard Llama tokenizer. The standard Llama tokenizer defaults to the fast tokenizer, I think.
I was facing some consistency issues with the fast tokenizer, so I had defaulted it to the slow (non-fast) one.
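For anyone who wants to match that behavior when loading, here is a minimal sketch; the repo id is taken from the error path later in this thread, and `use_fast=False` is the stock transformers way to force the slow tokenizer:

```python
# Minimal sketch: force the slow (SentencePiece-backed) Llama tokenizer,
# as the maintainer describes. Repo id taken from this thread.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("PipableAI/pipSQL-1.3b", use_fast=False)
print(type(tokenizer).__name__)  # expect LlamaTokenizer rather than LlamaTokenizerFast
```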


Hmm. Converting with llama.cpp's convert.py, it complains about the vocab size being 32022 instead of 32256. When I change config.json to 32022, it converts, but the result cannot be loaded. Wanted to give you a heads-up; any insight anyone can provide is appreciated.
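For reference, one way to see the mismatch convert.py is reporting is to compare what the config declares against what the tokenizer actually carries. A sketch using the transformers API (repo id and numbers as reported in this thread):

```python
# Sketch: compare the vocab size declared in config.json with the size
# the tokenizer actually carries. Numbers in comments are from this thread.
from transformers import AutoConfig, AutoTokenizer

repo = "PipableAI/pipSQL-1.3b"
config = AutoConfig.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=False)

print("config.vocab_size:", config.vocab_size)  # reported: 32256
print("len(tokenizer):   ", len(tokenizer))     # reported: 32022
```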

Pipable Inc org

Can you try going into your local llama model directory and editing "vocab_size" in params.json to be 32022?
There is room for a mismatch between the model's vocab size and the tokenizer's vocab size.
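For illustration, a hypothetical version of that edit on a local copy of the checkpoint (the path is an assumption; as the next reply shows, this alone did not fix loading):

```python
# Hypothetical sketch of the suggested edit: overwrite the declared
# vocab size in a local copy of the checkpoint so it matches the
# tokenizer's 32022 tokens. The path below is an assumption.
import json

path = "./pipSQL-1.3b/config.json"  # or params.json, if your copy has one

with open(path) as f:
    cfg = json.load(f)

cfg["vocab_size"] = 32022  # value from this thread

with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
```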


I did not find a params.json in the repo. I added one, but it does not appear to make a difference. I changed config.json, and when loading the llama.cpp server it gives the following error:

```
llama_model_loader: - type f32: 219 tensors
llama_model_load: error loading model: unordered_map::at
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model './PipableAI/pipSQL-1.3b/ggml-model-f32.gguf'
{"timestamp":1708311357,"level":"ERROR","function":"load_model","line":377,"message":"unable to load model","model":"./PipableAI/pipSQL-1.3b/ggml-model-f32.gguf"}
terminate called without an active exception
Aborted
```

Give us a day; we will debug and update this.
Thank you so much for pointing this out.

Apologies for spinning cycles; I was tripping over my own feet. I generated the tokenizer.model from the working PyTorch Python code, which compounded the issue: I was using the HuggingFace vocab type instead of BPE, which produced the errors above. Using the BPE vocab type solved the issue, and the GGUF was generated successfully.
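(For anyone who lands here later: in the convert.py of that era the vocab type is selected explicitly, with something along the lines of `python convert.py ./pipSQL-1.3b --vocab-type bpe --outtype f32`; the exact flags may differ in your llama.cpp checkout, so check `python convert.py --help`.)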

Like this model a lot, appreciate everyone's hand in its success.

jlinux changed discussion status to closed
