RobBERT: A Dutch RoBERTa-based Language Model
RobBERT is the state-of-the-art Dutch BERT model. It is a large pre-trained general Dutch language model that can be fine-tuned on a given dataset to perform text classification, regression, or token-tagging tasks. As such, it has been successfully used by many researchers and practitioners to achieve state-of-the-art performance on a wide range of Dutch natural language processing tasks.
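As a sketch of the token-tagging use case, the fine-tuned NER variant described by this card can be loaded with the Hugging Face `transformers` pipeline API. The example sentence below is illustrative; the model identifier comes from this card.

```python
from transformers import pipeline

# Load the fine-tuned NER variant of RobBERT from the Hugging Face Hub.
nlp = pipeline(
    "token-classification",
    model="pdelobelle/robbert-v2-dutch-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entities
)

# Tag named entities in a Dutch sentence (an illustrative example).
entities = nlp("Jan woont in Amsterdam en werkt bij KU Leuven.")
for ent in entities:
    print(ent["entity_group"], ent["word"], round(float(ent["score"]), 2))
```

Each returned entry contains the entity group, the matched text span, and a confidence score.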

Datasets used to train pdelobelle/robbert-v2-dutch-ner