Pieter Delobelle, François Remy, Miryam de Lhoneux, Thomas Demeester

Tweety-7b-dutch: A Dutch Large Language Model

Model Card for tweety-7b-dutch

tweety-7b-dutch is a foundation model focused on the Dutch language, incorporating a Dutch tokenizer for better understanding and generation of Dutch text. It is built on the Mistral architecture and employs Flash Attention for efficient processing within a context window of 8192 tokens. Tweety-7b-dutch is trained on the cleaned Dutch portion of the mC4 dataset, without instruction finetuning.

Model Details

Model Description

Our tweety-7b-dutch model is released under the Apache 2.0 license, encouraging applications in research, content creation, and language analysis.

As a base model, tweety-7b-dutch is suited for direct application to text generation and understanding tasks in Dutch.

Technical Specifications

Compute Infrastructure

Training utilized Nvidia H100 and A100 GPUs. Inference is accessible on lower-end hardware: any GPU capable of running Mistral-based models can run tweety-7b-dutch.
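As a minimal sketch, the model can be loaded for generation with Hugging Face transformers. The repository id and generation settings below are assumptions for illustration; adjust them to the actual released checkpoint.

```python
def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Generate a Dutch continuation for `prompt` (loads the model on first call)."""
    # Imports are local so the sketch can be read without transformers installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Tweety/tweety-7b-dutch-v24a"  # assumed repository id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # the model was trained in bfloat16
        device_map="auto",           # place layers on the available GPU(s)
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

Loading in bfloat16 matches the training precision and halves memory relative to float32, which is what makes inference feasible on consumer GPUs.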

Model Weights

  • This model was trained in bfloat16.
  • GGUF weights are released by Bram Vanroy.
  • The model has 7.39B parameters.