Fine-tuned Flair Model on Dutch ICDAR-Europeana NER Dataset

This Flair model was fine-tuned on the Dutch ICDAR-Europeana NER Dataset using hmBERT 64k as its backbone language model.

The ICDAR-Europeana NER Dataset is a preprocessed variant of the Europeana NER Corpora for Dutch and French.

The following named entities (NEs) were annotated: PER, LOC and ORG.
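
As a minimal usage sketch (assuming the flair package is installed; the example sentence is made up), the tagger can be loaded from the model hub and applied like this:

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger from the model hub
tagger = SequenceTagger.load(
    "stefan-it/hmbench-icdar-nl-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1"
)

# A made-up Dutch example sentence
sentence = Sentence("Willem van Oranje werd geboren in Dillenburg.")
tagger.predict(sentence)

# Print the predicted PER/LOC/ORG spans with their confidence scores
for span in sentence.get_spans("ner"):
    print(span.text, span.tag, round(span.score, 4))
```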

Results

We performed a hyper-parameter search over the following parameters, with 5 different seeds per configuration (a rough sketch of the grid follows the list):

  • Batch Sizes: [4, 8]
  • Learning Rates: [3e-05, 5e-05]
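
A sketch of this grid with Flair's trainer is shown below. The NER_ICDAR_EUROPEANA loader and ModelTrainer.fine_tune are real Flair APIs, but the hmBERT 64k checkpoint id and all hyper-parameters other than those listed above are assumptions on my part, not the exact training script:

```python
import flair
from flair.datasets import NER_ICDAR_EUROPEANA
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

for batch_size in [4, 8]:
    for learning_rate in [3e-5, 5e-5]:
        for seed in range(1, 6):
            flair.set_seed(seed)

            # Dutch split of the ICDAR-Europeana corpus shipped with Flair
            corpus = NER_ICDAR_EUROPEANA(language="nl")
            label_dict = corpus.make_label_dictionary(label_type="ner")

            # hmBERT 64k backbone; the checkpoint id is an assumption
            embeddings = TransformerWordEmbeddings(
                model="dbmdz/bert-base-historic-multilingual-64k-td-cased",
                layers="-1",                # last layer, per "layers-1" in the name
                subtoken_pooling="first",   # per "poolingfirst" in the name
                fine_tune=True,
            )
            tagger = SequenceTagger(
                hidden_size=256,
                embeddings=embeddings,
                tag_dictionary=label_dict,
                tag_type="ner",
                use_crf=False,              # per "crfFalse" in the name
                use_rnn=False,
                reproject_embeddings=False,
            )

            trainer = ModelTrainer(tagger, corpus)
            trainer.fine_tune(
                f"models/bs{batch_size}-e10-lr{learning_rate}-seed{seed}",
                learning_rate=learning_rate,
                mini_batch_size=batch_size,
                max_epochs=10,
            )
```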

We report the micro F1-score on the development set:

| Configuration   | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average         |
|-----------------|--------|--------|--------|--------|--------|-----------------|
| bs8-e10-lr3e-05 | 0.8405 | 0.8318 | 0.8437 | 0.8346 | 0.8444 | 0.8390 ± 0.0056 |
| bs4-e10-lr3e-05 | 0.8467 | 0.8303 | 0.8238 | 0.8386 | 0.8274 | 0.8334 ± 0.0092 |
| bs8-e10-lr5e-05 | 0.8284 | 0.8345 | 0.8310 | 0.8229 | 0.8368 | 0.8307 ± 0.0054 |
| bs4-e10-lr5e-05 | 0.8158 | 0.8142 | 0.8164 | 0.8249 | 0.8228 | 0.8188 ± 0.0047 |
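
The Average column appears to be the per-configuration mean with a sample standard deviation over the five seeds; treating it that way is my assumption, but it reproduces the reported numbers for the best configuration:

```python
import numpy as np

# Dev F1-scores of bs8-e10-lr3e-05 across the five seeds
scores = np.array([0.8405, 0.8318, 0.8437, 0.8346, 0.8444])

# Mean with sample standard deviation (ddof=1) gives 0.8390 ± 0.0056
print(f"{scores.mean():.4f} ± {scores.std(ddof=1):.4f}")
```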

The training log and TensorBoard logs (the latter are not available for hmBERT Base models) are also uploaded to the model hub.

More information about fine-tuning can be found here.

Acknowledgements

We thank Luisa März, Katharina Schmid and Erion Çano for their fruitful discussions about Historic Language Models.

Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC). Many thanks for providing access to the TPUs ❤️
