GEITje Ultra banner

GEITje 7B Ultra (GGUF version)

A conversational model for Dutch, aligned through AI feedback.

This is a GGUF version of BramVanroy/GEITje-7B-ultra, a powerful Dutch chatbot. It is ultimately a Mistral-based model, further pretrained on Dutch and then treated with supervised finetuning and DPO alignment. For more information on the model, data, licensing, and usage, see the main model's README.

Citation

If you use GEITje 7B Ultra (SFT) or any of its derivatives or quantizations, please cite the following paper:

@misc{vanroy2024geitje7bultraconversational,
      title={GEITje 7B Ultra: A Conversational Model for Dutch}, 
      author={Bram Vanroy},
      year={2024},
      eprint={2412.04092},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2412.04092}, 
}

Available quantization types and their expected performance difference compared to the f16 baseline, where higher perplexity is worse (figures from llama.cpp):

Q3_K_M  :  3.07G, +0.2496 ppl @ LLaMA-v1-7B
Q4_K_M  :  3.80G, +0.0532 ppl @ LLaMA-v1-7B
Q5_K_M  :  4.45G, +0.0122 ppl @ LLaMA-v1-7B
Q6_K    :  5.15G, +0.0008 ppl @ LLaMA-v1-7B
Q8_0    :  6.70G, +0.0004 ppl @ LLaMA-v1-7B
F16     : 13.00G              @ 7B

Also available on ollama.

Quants were made with release b2777 of llama.cpp.
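As a sketch of how to run one of these quants locally with llama.cpp (the exact GGUF file name below is an assumption; check the repository's file listing for the names actually published):

```shell
# Download one quant from the Hugging Face repo
# (file name is an assumption; verify it in the repo's "Files" tab).
huggingface-cli download BramVanroy/GEITje-7B-ultra-GGUF \
    GEITje-7B-ultra.Q5_K_M.gguf --local-dir .

# Generate with llama.cpp's main binary (built from b2777 or newer).
./main -m GEITje-7B-ultra.Q5_K_M.gguf \
    -p "Wat is de hoofdstad van Nederland?" -n 128
```

Q5_K_M is a reasonable default trade-off given the table above; pick a smaller quant if the file does not fit in your available (V)RAM.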

Usage

LM Studio

You can use this model in LM Studio, an easy-to-use interface for running optimized models locally. Simply search for BramVanroy/GEITje-7B-ultra-GGUF and download the quantization that fits your hardware.

Ollama

The model is available on ollama.
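A minimal sketch of pulling and chatting with the model through ollama (the model tag below is an assumption; check the ollama library page for the exact tag and available quant variants):

```shell
# Tag is an assumption; confirm it on the ollama model page.
ollama run bramvanroy/geitje-7b-ultra-gguf
```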
