---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-cased
widget:
- text: Je suis convaincu , a-t43 dit . que nous n"y parviendrions pas , mais nous
ne pouvons céder parce que l' état moral de nos troupe* en souffrirait trop .
( Fournier . ) Des avions ennemis lancent dix-sept bombes sur Dunkerque LONDRES
. 31 décembre .
---
# Fine-tuned Flair Model on French ICDAR-Europeana NER Dataset
This Flair model was fine-tuned on the
[French ICDAR-Europeana](https://github.com/stefan-it/historic-domain-adaptation-icdar)
NER dataset, using hmBERT as the backbone language model.
The ICDAR-Europeana NER Dataset is a preprocessed variant of the
[Europeana NER Corpora](https://github.com/EuropeanaNewspapers/ner-corpora) for Dutch and French.
The following named entities (NEs) were annotated: `PER`, `LOC` and `ORG`.
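The tagger can be loaded directly with Flair. Below is a minimal usage sketch, assuming a recent Flair version; as the model id it uses the best-scoring run from the results table below (any of the linked repositories works the same way):

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load one of the fine-tuned runs from the model hub
# (here: the best-scoring bs4-e10-lr3e-05 run linked in the results table).
tagger = SequenceTagger.load(
    "stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4"
)

# Historic French newspaper text, as in the widget example above.
sentence = Sentence("Des avions ennemis lancent dix-sept bombes sur Dunkerque LONDRES .")

# Predict PER, LOC and ORG entities and print the detected spans.
tagger.predict(sentence)
for entity in sentence.get_spans("ner"):
    print(entity)
```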
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set; per-run scores are on a 0–1 scale, while the Avg. column is given in percent (a short sanity-check sketch follows the result links):
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.7731][1] | [0.7696][2] | [0.7666][3] | [0.7823][4] | [0.7714][5] | 77.26 ± 0.53 |
| bs4-e10-lr5e-05 | [0.7740][6]  | [0.7571][7]  | [0.7685][8]  | [0.7694][9]  | [0.7704][10] | 76.79 ± 0.57 |
| bs8-e10-lr5e-05 | [0.7675][11] | [0.7698][12] | [0.7601][13] | [0.7657][14] | [0.7641][15] | 76.54 ± 0.33 |
| bs8-e10-lr3e-05 | [0.7596][16] | [0.7697][17] | [0.7711][18] | [0.7628][19] | [0.7574][20] | 76.41 ± 0.54 |
[1]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-icdar-fr-hmbert-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
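The Avg. column is the mean ± standard deviation over the five seeds, scaled to percent. As a sanity check, the following sketch reproduces the first row from the per-run scores (this assumes the population standard deviation was used, which matches the reported value):

```python
import statistics

# Development F1-scores of the five bs4-e10-lr3e-05 runs from the table above.
runs = [0.7731, 0.7696, 0.7666, 0.7823, 0.7714]

mean = statistics.mean(runs) * 100    # -> 77.26
std = statistics.pstdev(runs) * 100   # population std dev -> ~0.53
print(f"{mean:.2f} ± {std:.2f}")      # prints: 77.26 ± 0.53
```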
The [training log](training.log) is also uploaded to the model hub; TensorBoard logs are only available for hmByT5- and hmTEAMS-based models.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
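For illustration, here is a rough Flair fine-tuning sketch that mirrors the best configuration above (batch size 4, learning rate 3e-05, 10 epochs, first-subtoken pooling, last layer only, no CRF). This is a reconstruction from the run naming scheme, not the exact hmBench script; the `NER_ICDAR_EUROPEANA` corpus loader and the output path are used here illustratively:

```python
from flair.datasets import NER_ICDAR_EUROPEANA
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# French split of the preprocessed ICDAR-Europeana corpus.
corpus = NER_ICDAR_EUROPEANA(language="fr")
label_dict = corpus.make_label_dictionary(label_type="ner")

# hmBERT backbone; last layer, first-subtoken pooling
# (matches bs4-e10-lr3e-05-poolingfirst-layers-1-crfFalse).
embeddings = TransformerWordEmbeddings(
    model="dbmdz/bert-base-historic-multilingual-cased",
    layers="-1",
    subtoken_pooling="first",
    fine_tune=True,
)

tagger = SequenceTagger(
    hidden_size=256,
    embeddings=embeddings,
    tag_dictionary=label_dict,
    tag_type="ner",
    use_crf=False,
    use_rnn=False,
    reproject_embeddings=False,
)

trainer = ModelTrainer(tagger, corpus)
trainer.fine_tune(
    "resources/taggers/icdar-fr",  # illustrative output path
    learning_rate=3e-5,
    mini_batch_size=4,
    max_epochs=10,
)
```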
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️