This model is a fine-tuned version of GroNLP/bert-base-dutch-cased on GroNLP/dutch-cola. Only the clearly acceptable and clearly unacceptable examples were used for fine-tuning, i.e. the training data contains only sentences whose original annotation is 'None' (acceptable) or '*' (unacceptable).
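The filtering step described above can be sketched in plain Python. This is a minimal illustration, not the author's actual preprocessing code; the field name "Original annotation" is an assumption about the dataset's column naming.

```python
# Sketch of the filtering described above: keep only rows whose original
# annotation is 'None' (acceptable) or '*' (unacceptable).
# The "Original annotation" key is an assumed column name, not confirmed by the card.

def filter_binary(rows):
    """Keep only clearly acceptable ('None') or clearly unacceptable ('*') sentences."""
    keep = {"None", "*"}
    return [r for r in rows if r["Original annotation"] in keep]

# Toy rows standing in for GroNLP/dutch-cola examples.
rows = [
    {"Sentence": "Ik zie de hond.", "Original annotation": "None"},
    {"Sentence": "Hond de zie ik.", "Original annotation": "*"},
    {"Sentence": "Ik zie hond?", "Original annotation": "?"},  # dropped: marginal
]

filtered = filter_binary(rows)
print(len(filtered))  # → 2
```

With a real `datasets` object, the same predicate would be passed to `Dataset.filter`.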

It achieves an accuracy of 0.8127 on the evaluation set.

The following hyperparameters were used during training:

learning_rate: 4e-05
train_batch_size: 16
eval_batch_size: 16
seed: 42
num_epochs: 4
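The card lists the hyperparameters but not the training code. As a hedged sketch, they would map onto Hugging Face `TrainingArguments` roughly as follows; the output directory is a placeholder.

```python
# Hedged sketch: mapping the listed hyperparameters onto Hugging Face
# TrainingArguments. "bertje_dutch-cola" is a placeholder output path,
# not necessarily what the author used.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="bertje_dutch-cola",   # placeholder path
    learning_rate=4e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=4,
)
```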

| Epoch | Training Loss | Step | Validation Loss | Accuracy |
|:-----:|:-------------:|:----:|:---------------:|:--------:|
| 1.0   | 0.4971        | 1169 | 0.4629          | 0.7914   |
| 2.0   | 0.2478        | 2338 | 0.7041          | 0.7814   |
| 3.0   | 0.1333        | 3507 | 0.9777          | 0.8064   |
| 4.0   | 0.0687        | 4676 | 1.1322          | 0.8127   |
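The accuracy column above is the fraction of validation sentences classified correctly. A minimal sketch of that metric, with made-up toy values rather than the actual validation data:

```python
# Accuracy as reported in the table: correct predictions / total examples.
# The label encoding (1 = acceptable, 0 = unacceptable) is an assumption.

def accuracy(predictions, labels):
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Toy values, not the model's actual outputs.
preds  = [1, 0, 1, 1, 0]
labels = [1, 0, 0, 1, 0]
print(accuracy(preds, labels))  # → 0.8
```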
Model size: 109M parameters (Safetensors, tensor type F32)

Dataset used to train HylkeBr/bertje_dutch-cola: GroNLP/dutch-cola