
Share parameters for GGUF quantization?

#6
by espenhk - opened

Hi, thanks for sharing this great project!

I really appreciate that GGUF files are available for this model, but I'd like to have them for the scratch models as well. I'm able to quantize those myself, but I'd love to know whether you set any particular parameters during quantization that I should match to get the same performance.

If you can provide any necessary information, I'm happy to do this quantization on my machine and PR it into the scratch-model repos :)

Norwegian Large Language Models org

Hi, I was planning to do the trained-from-scratch models today. As for the parameters, we use the defaults, except that when converting (using the convert.py file) you need to specify that the vocab type is BPE. That's the only non-default setting.
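For reference, the conversion described above can be sketched roughly as below, assuming a llama.cpp checkout. The exact flag and binary names (`--vocab-type` vs. `--vocabtype`, `quantize` vs. `llama-quantize`) vary between llama.cpp versions, and the file paths are placeholders, so check `convert.py --help` in your checkout before running:

```shell
# Sketch: convert an HF checkpoint to GGUF and quantize it with llama.cpp.
# Paths and quantization type are examples; flag names depend on the
# llama.cpp version you have checked out.

# 1. Convert the HF checkpoint to a GGUF file, specifying that the
#    tokenizer vocab is BPE (the one non-default setting mentioned above):
python convert.py path/to/normistral-7b-scratch \
    --vocab-type bpe \
    --outfile normistral-7b-scratch-f16.gguf

# 2. Quantize with default settings (Q4_K_M shown as an example type):
./quantize normistral-7b-scratch-f16.gguf \
    normistral-7b-scratch-Q4_K_M.gguf Q4_K_M
```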

Hi,

Alright: I had worked out on my own that I needed to set the vocab type to BPE to get it running, so I've gone ahead and done it for you (for the normistral-scratch model; I haven't done the Bloom one). I've used the same formats as the ones shared in this repo, so if you want the files, let me know how best to get them to you :)

Norwegian Large Language Models org

Cool, then please open a PR to that repo (normistral-7b-scratch)!

OK, on it!

davda54 changed discussion status to closed
