---
language: "nl"
thumbnail: "https://github.com/iPieter/RobBERT/raw/master/res/robbert_logo.png"
tags:
- Dutch
- Flemish
- RoBERTa
- RobBERT
license: mit
datasets:
- oscar
- dbrd
- lassy-ud
- europarl-mono
- conll2002
widget:
- text: "Mijn naam is RobBERT en ik ben een taalmodel van de KU Leuven."
---
|
|
|
<p align="center">
  <img src="https://github.com/iPieter/RobBERT/raw/master/res/robbert_logo_with_name.png" alt="RobBERT: A Dutch RoBERTa-based Language Model" width="75%">
</p>
|
|
|
# RobBERT: Dutch RoBERTa-based Language Model
|
|
|
[RobBERT](https://github.com/iPieter/RobBERT) is the state-of-the-art Dutch BERT model. It is a large pre-trained general Dutch language model that can be fine-tuned on a given dataset to perform any text classification, regression, or token-tagging task. As such, it has been successfully used by many [researchers](https://scholar.google.com/scholar?oi=bibs&hl=en&cites=7180110604335112086) and [practitioners](https://huggingface.co/models?search=robbert) to achieve state-of-the-art performance on a wide range of Dutch natural language processing tasks.
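For example, the pre-trained model can be queried directly for masked-token predictions with the Hugging Face `transformers` library before any fine-tuning. A minimal sketch, assuming the base checkpoint is published on the Hub as `pdelobelle/robbert-v2-dutch-base`:

```python
# Minimal sketch: masked-token prediction with RobBERT via transformers.
# The model id "pdelobelle/robbert-v2-dutch-base" is an assumption; replace
# it with the id of the RobBERT checkpoint you want to use.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="pdelobelle/robbert-v2-dutch-base")

# RobBERT is RoBERTa-based, so it uses RoBERTa-style "<mask>" tokens.
sentence = "Er staat een <mask> in mijn tuin."  # "There is a <mask> in my garden."
for prediction in fill_mask(sentence, top_k=3):
    print(prediction["token_str"], round(prediction["score"], 3))
```

The same checkpoint can be loaded with `AutoModelForSequenceClassification` or `AutoModelForTokenClassification` to fine-tune it on a downstream classification or tagging dataset.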