Upload tokenizer.model with huggingface_hub
#1 · opened by TheBloke
No description provided.
This adds the missing tokenizer.model, which I copied from your tigerbot-70b-chat-v1 model. tokenizer.model is necessary to make GGUF quantisations.
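For reference, a file like this can be pushed to a repo as a pull request with huggingface_hub. Below is a minimal sketch; the repo_id and local path are illustrative placeholders, not the exact values used in this PR.

```python
# Minimal sketch: upload tokenizer.model to a model repo as a pull request.
# repo_id and the local file path are placeholders for illustration only.
from huggingface_hub import HfApi

api = HfApi()
api.upload_file(
    path_or_fileobj="tokenizer.model",          # local copy of the SentencePiece tokenizer
    path_in_repo="tokenizer.model",             # destination filename in the repo root
    repo_id="TigerResearch/tigerbot-70b-chat",  # placeholder target repo
    repo_type="model",
    create_pr=True,                             # open the change as a PR rather than pushing to main
    commit_message="Upload tokenizer.model",
)
```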
Thanks for the really interesting models! I have quantised them here:
https://huggingface.co/TheBloke/TigerBot-70B-Chat-GPTQ
https://huggingface.co/TheBloke/TigerBot-70B-Chat-GGUF
nice catch
i4never changed pull request status to merged