Have these quants had their pre-tokenizer fixed?

#8 by smcleod - opened

Many Llama 3 quantizations were created with a missing pre-tokenizer type; has this been fixed in these quants?

```
llm_load_vocab: missing pre-tokenizer type, using: 'default'
llm_load_vocab: ************************************
llm_load_vocab: GENERATION QUALITY WILL BE DEGRADED!
llm_load_vocab: CONSIDER REGENERATING THE MODEL
llm_load_vocab: ************************************
```
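For anyone who wants to check a downloaded GGUF themselves: llama.cpp records the pre-tokenizer as the `tokenizer.ggml.pre` metadata key, and the warning above fires when that key is absent. Below is a minimal sketch that parses string metadata straight from the GGUF header bytes (so no extra packages are needed); the helper names `read_string_metadata` and `check_pretokenizer` are our own, not part of any library.

```python
import struct

GGUF_MAGIC = b"GGUF"
# Scalar value sizes by GGUF metadata type id (per the GGUF spec in llama.cpp):
# 0/1: u8/i8, 2/3: u16/i16, 4/5: u32/i32, 6: f32, 7: bool, 10/11: u64/i64, 12: f64
SCALAR_SIZES = {0: 1, 1: 1, 2: 2, 3: 2, 4: 4, 5: 4, 6: 4, 7: 1, 10: 8, 11: 8, 12: 8}
T_STRING, T_ARRAY = 8, 9

def _read_string(buf, off):
    # GGUF strings are a u64 length followed by UTF-8 bytes
    (n,) = struct.unpack_from("<Q", buf, off)
    off += 8
    return buf[off:off + n].decode("utf-8"), off + n

def _skip_value(buf, off, vtype):
    # Advance past a value we don't care about
    if vtype in SCALAR_SIZES:
        return off + SCALAR_SIZES[vtype]
    if vtype == T_STRING:
        _, off = _read_string(buf, off)
        return off
    if vtype == T_ARRAY:
        etype, count = struct.unpack_from("<IQ", buf, off)
        off += 12
        for _ in range(count):
            off = _skip_value(buf, off, etype)
        return off
    raise ValueError(f"unknown GGUF value type {vtype}")

def read_string_metadata(buf):
    """Return all string-valued metadata KVs from a GGUF byte buffer."""
    if buf[:4] != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    version, n_tensors, n_kv = struct.unpack_from("<IQQ", buf, 4)
    off = 4 + 4 + 8 + 8  # magic + version + tensor_count + kv_count
    meta = {}
    for _ in range(n_kv):
        key, off = _read_string(buf, off)
        (vtype,) = struct.unpack_from("<I", buf, off)
        off += 4
        if vtype == T_STRING:
            meta[key], off = _read_string(buf, off)
        else:
            off = _skip_value(buf, off, vtype)
    return meta

def check_pretokenizer(path):
    # Metadata sits at the start of the file; a few MB is usually plenty
    with open(path, "rb") as f:
        meta = read_string_metadata(f.read(16 * 1024 * 1024))
    return meta.get("tokenizer.ggml.pre")  # None => llama.cpp falls back to 'default'
```

If `check_pretokenizer` returns `None`, the quant predates the fix and will trigger the warning; a repaired Llama 3 quant should report a value such as `llama-bpe`.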
LM Studio Community org

Btw @smcleod, these ones haven't been updated yet but will be soon.

smcleod changed discussion status to closed
