Vocab size mismatch (model has 32000, but tokenizer.model has 64000)
#4 opened by mradermacher
This error keeps me from quantizing it for experiments.
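For reference, the mismatch is easy to confirm locally before attempting a GGUF conversion. A minimal sketch, assuming the repo's config.json and tokenizer.model have been downloaded into the current directory:

```python
import json

import sentencepiece as spm

# Vocab size the model weights expect (from config.json).
with open("config.json") as f:
    model_vocab = json.load(f)["vocab_size"]

# Vocab size of the uploaded SentencePiece tokenizer.
sp = spm.SentencePieceProcessor(model_file="tokenizer.model")
tokenizer_vocab = sp.vocab_size()

print(f"model vocab_size:     {model_vocab}")
print(f"tokenizer.model size: {tokenizer_vocab}")
if model_vocab != tokenizer_vocab:
    # A mismatch like 32000 vs 64000 is what typically triggers the error above.
    print("Mismatch: conversion/quantization will likely fail.")
```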
@mradermacher apologies. I'm re-running the MoE build so I can upload the tokenizer.model again. Should be up tonight. Thank you for the heads up.
@mradermacher I just uploaded the tokenizer.model again. That should do it.
Cheers! That was quick. It loads fine now. I'm currently preparing static GGUF quants at https://huggingface.co/mradermacher/giant-hydra-moe-240b-GGUF, and if that works out well, weighted ones at https://huggingface.co/mradermacher/giant-hydra-moe-240b-i1-GGUF (might be a week), hopefully making the model more accessible to people.
mradermacher changed discussion status to closed