---
language: fi
license: mit
tags:
  - flair
  - token-classification
  - sequence-tagger-model
base_model: hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax
inference: false
widget:
  - text: >-
      Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia
      olisi kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San
      Franciscon näyttelyn puolesta .
---

Fine-tuned Flair Model on Finnish NewsEye NER Dataset (HIPE-2022)

This Flair model was fine-tuned on the Finnish NewsEye NER Dataset using hmByT5 as the backbone language model.

The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950 in French, German, Finnish, and Swedish. More information can be found here.

The following NEs were annotated: PER, LOC, ORG and HumanProd.
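A minimal prediction sketch with Flair is shown below. The model path is a placeholder, and it assumes the custom ByT5Embedding class described in the next section is importable before loading.

```python
# Minimal prediction sketch. The model path is a placeholder, and the custom
# ByT5Embedding class mentioned below must be importable before loading.
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("path/to/best-model.pt")  # placeholder path

sentence = Sentence(
    "Rooseveltin sihteeri ilmoittaa perättö - mäksi tiedon , että Rooseveltia "
    "olisi kehotettu käymään Englannissa , Saksassa ja Venäjällä puhumassa San "
    "Franciscon näyttelyn puolesta ."
)
tagger.predict(sentence)

# Print the predicted PER / LOC / ORG / HumanProd spans.
for span in sentence.get_spans("ner"):
    print(span)
```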

⚠️ Inference Widget ⚠️

Fine-tuning ByT5 models in Flair currently requires implementing a custom ByT5Embedding class.

This class needs to be present when running the model with Flair.

Thus, the inference widget does not currently work with hmByT5 on the Model Hub and is disabled.

This should be fixed in the future, when ByT5 fine-tuning is supported directly in Flair.
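For illustration, such an embedding class could look roughly like the sketch below. This is not the exact class shipped with this model: the class name, per-token byte-level tokenization, and mean-pooling of the encoder states are assumptions.

```python
# Rough sketch of a custom ByT5 token embedding for Flair; not the exact
# ByT5Embedding implementation used to train this model.
import torch
from flair.embeddings import TokenEmbeddings
from transformers import AutoTokenizer, T5EncoderModel


class ByT5Embeddings(TokenEmbeddings):
    def __init__(
        self,
        model_name: str = "hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax",
    ):
        super().__init__()
        self.name = model_name
        self.static_embeddings = False
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = T5EncoderModel.from_pretrained(model_name)
        self._embedding_length = self.model.config.d_model

    @property
    def embedding_length(self) -> int:
        return self._embedding_length

    def _add_embeddings_internal(self, sentences):
        # Embed each token separately at byte level and mean-pool the encoder
        # hidden states into a single vector per token (a simplification).
        for sentence in sentences:
            for token in sentence:
                inputs = self.tokenizer(token.text, return_tensors="pt")
                with torch.no_grad():
                    hidden = self.model(**inputs).last_hidden_state
                token.set_embedding(self.name, hidden.mean(dim=1).squeeze(0))
        return sentences
```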

Results

We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:

  • Batch Sizes: [8, 4]
  • Learning Rates: [0.00015, 0.00016]

We report the micro F1-score on the development set:

| Configuration     | Run 1  | Run 2  | Run 3  | Run 4  | Run 5  | Avg.         |
|-------------------|--------|--------|--------|--------|--------|--------------|
| bs4-e10-lr0.00016 | 0.8017 | 0.7406 | 0.7706 | 0.7759 | 0.7983 | 77.74 ± 2.2  |
| bs4-e10-lr0.00015 | 0.7729 | 0.7553 | 0.7526 | 0.7547 | 0.7913 | 76.54 ± 1.49 |
| bs8-e10-lr0.00016 | 0.6638 | 0.5875 | 0.7800 | 0.7804 | 0.7176 | 70.59 ± 7.34 |
| bs8-e10-lr0.00015 | 0.6783 | 0.5867 | 0.7229 | 0.7761 | 0.6970 | 69.22 ± 6.22 |

The training log and TensorBoard logs are also uploaded to the model hub.

More information about fine-tuning can be found here.
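As a rough orientation, one configuration from the grid above (batch size 4, learning rate 0.00016, 10 epochs) could be set up along the following lines. This is a sketch, not the actual training script: the NER_HIPE_2022 loading arguments, the tagger settings, the output path, and the use of the ByT5Embeddings sketch from the widget section are assumptions.

```python
# Sketch of one fine-tuning configuration (batch size 4, learning rate 0.00016,
# 10 epochs); not the actual training script used for this model.
from flair.datasets import NER_HIPE_2022
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# Loading arguments are assumptions; see the HIPE-2022 dataset documentation.
corpus = NER_HIPE_2022(dataset_name="newseye", language="fi")
label_dictionary = corpus.make_label_dictionary(label_type="ner")

# ByT5Embeddings refers to the custom embedding sketch shown earlier
# (assumed to be importable in this script).
embeddings = ByT5Embeddings(
    "hmbyt5-preliminary/byt5-small-historic-multilingual-span20-flax"
)

tagger = SequenceTagger(
    hidden_size=256,
    embeddings=embeddings,
    tag_dictionary=label_dictionary,
    tag_type="ner",
    use_crf=False,
    use_rnn=False,
    reproject_embeddings=False,
)

trainer = ModelTrainer(tagger, corpus)
trainer.fine_tune(
    "resources/taggers/newseye-fi-hmbyt5",  # output path is a placeholder
    learning_rate=0.00016,
    mini_batch_size=4,
    max_epochs=10,
)
```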

Acknowledgements

We thank Luisa März, Katharina Schmid and Erion Çano for their fruitful discussions about Historic Language Models.

Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC). Many thanks for providing access to the TPUs ❤️