RobBERTje is a collection of distilled models based on RobBERT. Multiple models with different sizes and training settings are available, so you can pick the one that best fits your use case.
We are also continuously working on releasing better-performing models, so watch the repository for updates.
- February 21, 2022: Our paper about RobBERTje has been published in volume 11 of the CLIN Journal!
- July 2, 2021: Publicly released 4 RobBERTje models.
- May 12, 2021: RobBERTje was accepted at CLIN31 for an oral presentation!
| Model | Description | Parameters | Training size | Huggingface id |
|---|---|---|---|---|
| Non-shuffled | Trained on the non-shuffled variant of the OSCAR corpus, without any operations to preserve this order during training and distillation. | 74 M | 1 GB | DTAI-KULeuven/robbertje-1-gb-non-shuffled |
| Shuffled | Trained on the publicly available and shuffled OSCAR corpus. | 74 M | 1 GB | DTAI-KULeuven/robbertje-1-gb-shuffled |
| Merged (p=0.5) | Same as the non-shuffled variant, but sequential sentences of the same document are merged with a probability of 50%. | 74 M | 1 GB | DTAI-KULeuven/robbertje-1-gb-merged |
| BORT | A smaller version with 8 attention heads instead of 12, and 4 layers instead of 6 (RobBERT itself has 12 layers). | 46 M | 1 GB | DTAI-KULeuven/robbertje-1-gb-bort |
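All of the released models can be loaded with the Hugging Face `transformers` library using the ids in the table above. A minimal sketch using the fill-mask pipeline (the Dutch example sentence is just an illustration):

```python
from transformers import pipeline

# Any Huggingface id from the table works here; the shuffled 1 GB model is shown.
unmasker = pipeline("fill-mask", model="DTAI-KULeuven/robbertje-1-gb-shuffled")

# RobBERT(je) uses the RoBERTa-style <mask> token.
print(unmasker("Er staat een <mask> in mijn tuin."))
```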
We calculated the pseudo-perplexity (PPPL) introduced by Salazar et al. (2020), which is a built-in metric in our distillation library. This metric gives an indication of how well the model captures the input distribution.
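Conceptually, PPPL masks out each token in turn, scores it with the masked-language-modelling head, and exponentiates the negative mean token log-probability. The sketch below is a hypothetical standalone reimplementation of that idea, not our library's built-in metric; the `pseudo_perplexity` helper name and the example sentence are our own:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

def pseudo_perplexity(text, model, tokenizer):
    """Mask each token in turn and average its MLM log-probability."""
    input_ids = tokenizer(text, return_tensors="pt")["input_ids"][0]
    log_probs = []
    for i in range(1, len(input_ids) - 1):  # skip the <s> and </s> special tokens
        masked = input_ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        log_probs.append(torch.log_softmax(logits, dim=-1)[input_ids[i]].item())
    # PPPL = exp(-(1/N) * sum_t log P(w_t | W_\t))
    return torch.exp(-torch.tensor(log_probs).mean()).item()

tokenizer = AutoTokenizer.from_pretrained("DTAI-KULeuven/robbertje-1-gb-shuffled")
model = AutoModelForMaskedLM.from_pretrained("DTAI-KULeuven/robbertje-1-gb-shuffled")
print(pseudo_perplexity("Er staat een boom in mijn tuin.", model, tokenizer))
```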
We also evaluated our models on several downstream tasks, just like the teacher model RobBERT. Since that evaluation, a Dutch natural language inference task named SICK-NL was released, so we evaluated our models on it as well.