stefan-it committed on
Commit
17a04d9
1 Parent(s): dbf2c0b

Upload folder using huggingface_hub

Files changed (41)
  1. README.md +61 -0
  2. __pycache__/flair-fine-tuner.cpython-311.pyc +0 -0
  3. __pycache__/flair-log-parser.cpython-311.pyc +0 -0
  4. __pycache__/utils.cpython-311.pyc +0 -0
  5. auto-train/hmbench-hipe2020/de-hmteams/teams-base-historic-multilingual-discriminator-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1/loss.tsv +0 -0
  6. auto-train/hmbench-hipe2020/de-hmteams/teams-base-historic-multilingual-discriminator-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1/training.log +71 -0
  7. configs/ajmc/de/hmbert.json +11 -0
  8. configs/ajmc/de/hmteams.json +11 -0
  9. configs/ajmc/en/hmbert.json +11 -0
  10. configs/ajmc/en/hmteams.json +11 -0
  11. configs/ajmc/fr/hmbert.json +11 -0
  12. configs/ajmc/fr/hmteams.json +11 -0
  13. configs/ajmc/multi/hmbert.json +11 -0
  14. configs/ajmc/multi/hmteams.json +11 -0
  15. configs/hipe2020/de/hmbert.json +11 -0
  16. configs/hipe2020/de/hmteams.json +11 -0
  17. configs/icdar/fr/hmbert.json +11 -0
  18. configs/icdar/fr/hmteams.json +11 -0
  19. configs/icdar/multi/hmbert.json +11 -0
  20. configs/icdar/multi/hmteams.json +11 -0
  21. configs/icdar/nl/hmbert.json +11 -0
  22. configs/icdar/nl/hmteams.json +11 -0
  23. configs/letemps/fr/hmbert.json +11 -0
  24. configs/letemps/fr/hmteams.json +11 -0
  25. configs/newseye/de/hmbert.json +11 -0
  26. configs/newseye/de/hmteams.json +11 -0
  27. configs/newseye/fi/hmbert.json +11 -0
  28. configs/newseye/fi/hmteams.json +11 -0
  29. configs/newseye/fr/hmbert.json +11 -0
  30. configs/newseye/fr/hmteams.json +11 -0
  31. configs/newseye/multi/hmbert.json +11 -0
  32. configs/newseye/multi/hmteams.json +11 -0
  33. configs/newseye/sv/hmbert.json +11 -0
  34. configs/newseye/sv/hmteams.json +11 -0
  35. configs/topres19th/en/hmbert.json +11 -0
  36. configs/topres19th/en/hmteams.json +11 -0
  37. flair-fine-tuner.py +132 -0
  38. flair-log-parser.py +93 -0
  39. requirements.txt +1 -0
  40. script.py +48 -0
  41. utils.py +385 -0
README.md ADDED
@@ -0,0 +1,61 @@
+ # NER Fine-Tuning
+
+ We use Flair for fine-tuning NER models on
+ [HIPE-2022](https://github.com/hipe-eval/HIPE-2022-data) datasets from the
+ [HIPE-2022 Shared Task](https://hipe-eval.github.io/HIPE-2022/).
+
+ All models are fine-tuned on A10 (24GB) and A100 (40GB) instances from
+ [Lambda Cloud](https://lambdalabs.com/service/gpu-cloud) using Flair:
+
+ ```bash
+ $ git clone https://github.com/flairNLP/flair.git
+ $ cd flair && git checkout 419f13a05d6b36b2a42dd73a551dc3ba679f820c
+ $ pip3 install -e .
+ $ cd ..
+ ```
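+
+ The same Flair commit is also pinned in [`requirements.txt`](requirements.txt), so installing it from there should work as well:
+
+ ```bash
+ $ pip3 install -r requirements.txt
+ ```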
+
+ Clone this repository for fine-tuning NER models:
+
+ ```bash
+ $ git clone https://github.com/stefan-it/hmTEAMS.git
+ $ cd hmTEAMS/bench
+ ```
+
+ Authorize via the Hugging Face CLI (needed because hmTEAMS is currently only available after approval):
+
+ ```bash
+ # Use an access token from https://huggingface.co/settings/tokens
+ $ huggingface-cli login
+ ```
+
+ We use a config-driven hyper-parameter search. The script [`flair-fine-tuner.py`](flair-fine-tuner.py) can be used to
+ fine-tune NER models from our Model Zoo.
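+
+ Each configuration specifies seeds, batch sizes, learning rates, epochs, subword pooling, the Hugging Face model and the HIPE dataset(s) to fine-tune on. The script takes a single argument, the path to such a JSON configuration, e.g. for hmTEAMS on German AjMC:
+
+ ```bash
+ $ python3 flair-fine-tuner.py configs/ajmc/de/hmteams.json
+ ```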
+
+ # Benchmark
+
+ We test our pretrained language models on various datasets from HIPE-2020, HIPE-2022 and Europeana. The following table
+ shows an overview of the datasets used.
+
+ | Language | Datasets |
+ |----------|----------------------------------------------------|
+ | English | [AjMC] - [TopRes19th] |
+ | German | [AjMC] - [NewsEye] |
+ | French | [AjMC] - [ICDAR-Europeana] - [LeTemps] - [NewsEye] |
+ | Finnish | [NewsEye] |
+ | Swedish | [NewsEye] |
+ | Dutch | [ICDAR-Europeana] |
+
+ [AjMC]: https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md
+ [NewsEye]: https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md
+ [TopRes19th]: https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-topres19th.md
+ [ICDAR-Europeana]: https://github.com/stefan-it/historic-domain-adaptation-icdar
+ [LeTemps]: https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-letemps.md
+
+ # Results
+
+ We report the averaged F1-score over 5 runs with different seeds on the development set:
+
+ | Model | English AjMC | German AjMC | French AjMC | German NewsEye | French NewsEye | Finnish NewsEye | Swedish NewsEye | Dutch ICDAR | French ICDAR | French LeTemps | English TopRes19th | Avg. |
+ |---------------------------------------------------------------------------|--------------|--------------|--------------|----------------|----------------|-----------------|-----------------|--------------|--------------|----------------|--------------------|-----------|
+ | hmBERT (32k) [Schweter et al.](https://ceur-ws.org/Vol-3180/paper-87.pdf) | 85.36 ± 0.94 | 89.08 ± 0.09 | 85.10 ± 0.60 | 39.65 ± 1.01 | 81.47 ± 0.36 | 77.28 ± 0.37 | 82.85 ± 0.83 | 82.11 ± 0.61 | 77.21 ± 0.16 | 65.73 ± 0.56 | 80.94 ± 0.86 | 76.98 |
+ | hmTEAMS (Ours) | 86.41 ± 0.36 | 88.64 ± 0.42 | 85.41 ± 0.67 | 41.51 ± 2.82 | 83.20 ± 0.79 | 79.27 ± 1.88 | 82.78 ± 0.60 | 88.21 ± 0.39 | 78.03 ± 0.39 | 66.71 ± 0.46 | 81.36 ± 0.59 | **78.32** |
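+
+ Such per-configuration averages can be computed from the Flair training logs with
+ [`flair-log-parser.py`](flair-log-parser.py), which takes a glob pattern matching the output folders
+ (the pattern below is only an illustrative example):
+
+ ```bash
+ $ python3 flair-log-parser.py "teams-base-historic-multilingual-discriminator-*"
+ ```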
__pycache__/flair-fine-tuner.cpython-311.pyc ADDED
Binary file (5.85 kB).
 
__pycache__/flair-log-parser.cpython-311.pyc ADDED
Binary file (5.67 kB).
 
__pycache__/utils.cpython-311.pyc ADDED
Binary file (12.6 kB).
 
auto-train/hmbench-hipe2020/de-hmteams/teams-base-historic-multilingual-discriminator-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1/loss.tsv ADDED
File without changes
auto-train/hmbench-hipe2020/de-hmteams/teams-base-historic-multilingual-discriminator-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1/training.log ADDED
@@ -0,0 +1,71 @@
1
+ 2023-09-02 00:42:11,828 ----------------------------------------------------------------------------------------------------
2
+ 2023-09-02 00:42:11,829 Model: "SequenceTagger(
3
+ (embeddings): TransformerWordEmbeddings(
4
+ (model): ElectraModel(
5
+ (embeddings): ElectraEmbeddings(
6
+ (word_embeddings): Embedding(32001, 768)
7
+ (position_embeddings): Embedding(512, 768)
8
+ (token_type_embeddings): Embedding(2, 768)
9
+ (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
10
+ (dropout): Dropout(p=0.1, inplace=False)
11
+ )
12
+ (encoder): ElectraEncoder(
13
+ (layer): ModuleList(
14
+ (0-11): 12 x ElectraLayer(
15
+ (attention): ElectraAttention(
16
+ (self): ElectraSelfAttention(
17
+ (query): Linear(in_features=768, out_features=768, bias=True)
18
+ (key): Linear(in_features=768, out_features=768, bias=True)
19
+ (value): Linear(in_features=768, out_features=768, bias=True)
20
+ (dropout): Dropout(p=0.1, inplace=False)
21
+ )
22
+ (output): ElectraSelfOutput(
23
+ (dense): Linear(in_features=768, out_features=768, bias=True)
24
+ (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
25
+ (dropout): Dropout(p=0.1, inplace=False)
26
+ )
27
+ )
28
+ (intermediate): ElectraIntermediate(
29
+ (dense): Linear(in_features=768, out_features=3072, bias=True)
30
+ (intermediate_act_fn): GELUActivation()
31
+ )
32
+ (output): ElectraOutput(
33
+ (dense): Linear(in_features=3072, out_features=768, bias=True)
34
+ (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
35
+ (dropout): Dropout(p=0.1, inplace=False)
36
+ )
37
+ )
38
+ )
39
+ )
40
+ )
41
+ )
42
+ (locked_dropout): LockedDropout(p=0.5)
43
+ (linear): Linear(in_features=768, out_features=21, bias=True)
44
+ (loss_function): CrossEntropyLoss()
45
+ )"
46
+ 2023-09-02 00:42:11,829 ----------------------------------------------------------------------------------------------------
47
+ 2023-09-02 00:42:11,829 MultiCorpus: 3575 train + 1235 dev + 1266 test sentences
48
+ - NER_HIPE_2022 Corpus: 3575 train + 1235 dev + 1266 test sentences - /home/stefan/.flair/datasets/ner_hipe_2022/v2.1/hipe2020/de/with_doc_seperator
49
+ 2023-09-02 00:42:11,829 ----------------------------------------------------------------------------------------------------
50
+ 2023-09-02 00:42:11,829 Train: 3575 sentences
51
+ 2023-09-02 00:42:11,829 (train_with_dev=False, train_with_test=False)
52
+ 2023-09-02 00:42:11,829 ----------------------------------------------------------------------------------------------------
53
+ 2023-09-02 00:42:11,829 Training Params:
54
+ 2023-09-02 00:42:11,829 - learning_rate: "3e-05"
55
+ 2023-09-02 00:42:11,829 - mini_batch_size: "8"
56
+ 2023-09-02 00:42:11,829 - max_epochs: "10"
57
+ 2023-09-02 00:42:11,829 - shuffle: "True"
58
+ 2023-09-02 00:42:11,829 ----------------------------------------------------------------------------------------------------
59
+ 2023-09-02 00:42:11,829 Plugins:
60
+ 2023-09-02 00:42:11,829 - LinearScheduler | warmup_fraction: '0.1'
61
+ 2023-09-02 00:42:11,829 ----------------------------------------------------------------------------------------------------
62
+ 2023-09-02 00:42:11,829 Final evaluation on model from best epoch (best-model.pt)
63
+ 2023-09-02 00:42:11,829 - metric: "('micro avg', 'f1-score')"
64
+ 2023-09-02 00:42:11,829 ----------------------------------------------------------------------------------------------------
65
+ 2023-09-02 00:42:11,829 Computation:
66
+ 2023-09-02 00:42:11,829 - compute on device: cuda:0
67
+ 2023-09-02 00:42:11,829 - embedding storage: none
68
+ 2023-09-02 00:42:11,829 ----------------------------------------------------------------------------------------------------
69
+ 2023-09-02 00:42:11,829 Model training base path: "hmbench-hipe2020/de-hmteams/teams-base-historic-multilingual-discriminator-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1"
70
+ 2023-09-02 00:42:11,829 ----------------------------------------------------------------------------------------------------
71
+ 2023-09-02 00:42:11,829 ----------------------------------------------------------------------------------------------------
configs/ajmc/de/hmbert.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "seeds": [1,2,3,4,5],
3
+ "batch_sizes": [4, 8],
4
+ "hf_model": "dbmdz/bert-base-historic-multilingual-cased",
5
+ "context_size": 0,
6
+ "epochs": [10],
7
+ "learning_rates": [3e-5, 5e-5],
8
+ "subword_poolings": ["first"],
9
+ "hipe_datasets": ["ajmc/de"],
10
+ "cuda": "0"
11
+ }
configs/ajmc/de/hmteams.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "seeds": [1,2,3,4,5],
3
+ "batch_sizes": [4, 8],
4
+ "hf_model": "hmteams/teams-base-historic-multilingual-discriminator",
5
+ "context_size": 0,
6
+ "epochs": [10],
7
+ "learning_rates": [3e-5, 5e-5],
8
+ "subword_poolings": ["first"],
9
+ "hipe_datasets": ["ajmc/de"],
10
+ "cuda": "0"
11
+ }
configs/ajmc/en/hmbert.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "seeds": [1,2,3,4,5],
3
+ "batch_sizes": [4, 8],
4
+ "hf_model": "dbmdz/bert-base-historic-multilingual-cased",
5
+ "context_size": 0,
6
+ "epochs": [10],
7
+ "learning_rates": [3e-5, 5e-5],
8
+ "subword_poolings": ["first"],
9
+ "hipe_datasets": ["ajmc/en"],
10
+ "cuda": "0"
11
+ }
configs/ajmc/en/hmteams.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "seeds": [1,2,3,4,5],
3
+ "batch_sizes": [4, 8],
4
+ "hf_model": "hmteams/teams-base-historic-multilingual-discriminator",
5
+ "context_size": 0,
6
+ "epochs": [10],
7
+ "learning_rates": [3e-5, 5e-5],
8
+ "subword_poolings": ["first"],
9
+ "hipe_datasets": ["ajmc/en"],
10
+ "cuda": "0"
11
+ }
configs/ajmc/fr/hmbert.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "seeds": [1,2,3,4,5],
3
+ "batch_sizes": [4, 8],
4
+ "hf_model": "dbmdz/bert-base-historic-multilingual-cased",
5
+ "context_size": 0,
6
+ "epochs": [10],
7
+ "learning_rates": [3e-5, 5e-5],
8
+ "subword_poolings": ["first"],
9
+ "hipe_datasets": ["ajmc/fr"],
10
+ "cuda": "0"
11
+ }
configs/ajmc/fr/hmteams.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "seeds": [1,2,3,4,5],
3
+ "batch_sizes": [4, 8],
4
+ "hf_model": "hmteams/teams-base-historic-multilingual-discriminator",
5
+ "context_size": 0,
6
+ "epochs": [10],
7
+ "learning_rates": [3e-5, 5e-5],
8
+ "subword_poolings": ["first"],
9
+ "hipe_datasets": ["ajmc/fr"],
10
+ "cuda": "0"
11
+ }
configs/ajmc/multi/hmbert.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "seeds": [1,2,3,4,5],
3
+ "batch_sizes": [4, 8],
4
+ "hf_model": "dbmdz/bert-base-historic-multilingual-cased",
5
+ "context_size": 0,
6
+ "epochs": [10],
7
+ "learning_rates": [3e-5, 5e-5],
8
+ "subword_poolings": ["first"],
9
+ "hipe_datasets": ["ajmc/de", "ajmc/en", "ajmc/fr"],
10
+ "cuda": "0"
11
+ }
configs/ajmc/multi/hmteams.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "seeds": [1,2,3,4,5],
3
+ "batch_sizes": [4, 8],
4
+ "hf_model": "hmteams/teams-base-historic-multilingual-discriminator",
5
+ "context_size": 0,
6
+ "epochs": [10],
7
+ "learning_rates": [3e-5, 5e-5],
8
+ "subword_poolings": ["first"],
9
+ "hipe_datasets": ["ajmc/de", "ajmc/en", "ajmc/fr"],
10
+ "cuda": "0"
11
+ }
configs/hipe2020/de/hmbert.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "seeds": [1,2,3,4,5],
3
+ "batch_sizes": [8, 4],
4
+ "hf_model": "dbmdz/bert-base-historic-multilingual-cased",
5
+ "context_size": 0,
6
+ "epochs": [10],
7
+ "learning_rates": [3e-5, 5e-5],
8
+ "subword_poolings": ["first"],
9
+ "hipe_datasets": ["hipe2020/de"],
10
+ "cuda": "0"
11
+ }
configs/hipe2020/de/hmteams.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "seeds": [1,2,3,4,5],
3
+ "batch_sizes": [8, 4],
4
+ "hf_model": "hmteams/teams-base-historic-multilingual-discriminator",
5
+ "context_size": 0,
6
+ "epochs": [10],
7
+ "learning_rates": [3e-5, 5e-5],
8
+ "subword_poolings": ["first"],
9
+ "hipe_datasets": ["hipe2020/de"],
10
+ "cuda": "0"
11
+ }
configs/icdar/fr/hmbert.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "seeds": [1,2,3,4,5],
3
+ "batch_sizes": [4, 8],
4
+ "hf_model": "dbmdz/bert-base-historic-multilingual-cased",
5
+ "context_size": 0,
6
+ "epochs": [10],
7
+ "learning_rates": [3e-5, 5e-5],
8
+ "subword_poolings": ["first"],
9
+ "hipe_datasets": ["icdar/fr"],
10
+ "cuda": "0"
11
+ }
configs/icdar/fr/hmteams.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "seeds": [1,2,3,4,5],
3
+ "batch_sizes": [4, 8],
4
+ "hf_model": "hmteams/teams-base-historic-multilingual-discriminator",
5
+ "context_size": 0,
6
+ "epochs": [10],
7
+ "learning_rates": [3e-5, 5e-5],
8
+ "subword_poolings": ["first"],
9
+ "hipe_datasets": ["icdar/fr"],
10
+ "cuda": "0"
11
+ }
configs/icdar/multi/hmbert.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "seeds": [1,2,3,4,5],
3
+ "batch_sizes": [4, 8],
4
+ "hf_model": "dbmdz/bert-base-historic-multilingual-cased",
5
+ "context_size": 0,
6
+ "epochs": [10],
7
+ "learning_rates": [3e-5, 5e-5],
8
+ "subword_poolings": ["first"],
9
+ "hipe_datasets": ["icdar/fr", "icdar/nl"],
10
+ "cuda": "0"
11
+ }
configs/icdar/multi/hmteams.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "seeds": [1,2,3,4,5],
3
+ "batch_sizes": [4, 8],
4
+ "hf_model": "hmteams/teams-base-historic-multilingual-discriminator",
5
+ "context_size": 0,
6
+ "epochs": [10],
7
+ "learning_rates": [3e-5, 5e-5],
8
+ "subword_poolings": ["first"],
9
+ "hipe_datasets": ["icdar/fr", "icdar/nl"],
10
+ "cuda": "0"
11
+ }
configs/icdar/nl/hmbert.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "seeds": [1,2,3,4,5],
3
+ "batch_sizes": [4, 8],
4
+ "hf_model": "dbmdz/bert-base-historic-multilingual-cased",
5
+ "context_size": 0,
6
+ "epochs": [10],
7
+ "learning_rates": [3e-5, 5e-5],
8
+ "subword_poolings": ["first"],
9
+ "hipe_datasets": ["icdar/nl"],
10
+ "cuda": "0"
11
+ }
configs/icdar/nl/hmteams.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "seeds": [1,2,3,4,5],
3
+ "batch_sizes": [4, 8],
4
+ "hf_model": "hmteams/teams-base-historic-multilingual-discriminator",
5
+ "context_size": 0,
6
+ "epochs": [10],
7
+ "learning_rates": [3e-5, 5e-5],
8
+ "subword_poolings": ["first"],
9
+ "hipe_datasets": ["icdar/nl"],
10
+ "cuda": "0"
11
+ }
configs/letemps/fr/hmbert.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "seeds": [1,2,3,4,5],
3
+ "batch_sizes": [4, 8],
4
+ "hf_model": "dbmdz/bert-base-historic-multilingual-cased",
5
+ "context_size": 0,
6
+ "epochs": [10],
7
+ "learning_rates": [3e-5, 5e-5],
8
+ "subword_poolings": ["first"],
9
+ "hipe_datasets": ["letemps/fr"],
10
+ "cuda": "0"
11
+ }
configs/letemps/fr/hmteams.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "seeds": [1,2,3,4,5],
3
+ "batch_sizes": [4, 8],
4
+ "hf_model": "hmteams/teams-base-historic-multilingual-discriminator",
5
+ "context_size": 0,
6
+ "epochs": [10],
7
+ "learning_rates": [3e-5, 5e-5],
8
+ "subword_poolings": ["first"],
9
+ "hipe_datasets": ["letemps/fr"],
10
+ "cuda": "0"
11
+ }
configs/newseye/de/hmbert.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "seeds": [1,2,3,4,5],
3
+ "batch_sizes": [8, 4],
4
+ "hf_model": "dbmdz/bert-base-historic-multilingual-cased",
5
+ "context_size": 0,
6
+ "epochs": [10],
7
+ "learning_rates": [3e-5, 5e-5],
8
+ "subword_poolings": ["first"],
9
+ "hipe_datasets": ["newseye/de"],
10
+ "cuda": "0"
11
+ }
configs/newseye/de/hmteams.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "seeds": [1,2,3,4,5],
3
+ "batch_sizes": [8, 4],
4
+ "hf_model": "hmteams/teams-base-historic-multilingual-discriminator",
5
+ "context_size": 0,
6
+ "epochs": [10],
7
+ "learning_rates": [3e-5, 5e-5],
8
+ "subword_poolings": ["first"],
9
+ "hipe_datasets": ["newseye/de"],
10
+ "cuda": "0"
11
+ }
configs/newseye/fi/hmbert.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "seeds": [1,2,3,4,5],
3
+ "batch_sizes": [4, 8],
4
+ "hf_model": "dbmdz/bert-base-historic-multilingual-cased",
5
+ "context_size": 0,
6
+ "epochs": [10],
7
+ "learning_rates": [3e-5, 5e-5],
8
+ "subword_poolings": ["first"],
9
+ "hipe_datasets": ["newseye/fi"],
10
+ "cuda": "0"
11
+ }
configs/newseye/fi/hmteams.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "seeds": [1,2,3,4,5],
3
+ "batch_sizes": [4, 8],
4
+ "hf_model": "hmteams/teams-base-historic-multilingual-discriminator",
5
+ "context_size": 0,
6
+ "epochs": [10],
7
+ "learning_rates": [3e-5, 5e-5],
8
+ "subword_poolings": ["first"],
9
+ "hipe_datasets": ["newseye/fi"],
10
+ "cuda": "0"
11
+ }
configs/newseye/fr/hmbert.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "seeds": [1,2,3,4,5],
3
+ "batch_sizes": [8, 4],
4
+ "hf_model": "dbmdz/bert-base-historic-multilingual-cased",
5
+ "context_size": 0,
6
+ "epochs": [10],
7
+ "learning_rates": [3e-5, 5e-5],
8
+ "subword_poolings": ["first"],
9
+ "hipe_datasets": ["newseye/fr"],
10
+ "cuda": "0"
11
+ }
configs/newseye/fr/hmteams.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "seeds": [1,2,3,4,5],
3
+ "batch_sizes": [8, 4],
4
+ "hf_model": "hmteams/teams-base-historic-multilingual-discriminator",
5
+ "context_size": 0,
6
+ "epochs": [10],
7
+ "learning_rates": [3e-5, 5e-5],
8
+ "subword_poolings": ["first"],
9
+ "hipe_datasets": ["newseye/fr"],
10
+ "cuda": "0"
11
+ }
configs/newseye/multi/hmbert.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "seeds": [1,2,3,4,5],
3
+ "batch_sizes": [4, 8],
4
+ "hf_model": "dbmdz/bert-base-historic-multilingual-cased",
5
+ "context_size": 0,
6
+ "epochs": [10],
7
+ "learning_rates": [3e-5, 5e-5],
8
+ "subword_poolings": ["first"],
9
+ "hipe_datasets": ["newseye/fi", "newseye/sv"],
10
+ "cuda": "0"
11
+ }
configs/newseye/multi/hmteams.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "seeds": [1,2,3,4,5],
3
+ "batch_sizes": [4, 8],
4
+ "hf_model": "hmteams/teams-base-historic-multilingual-discriminator",
5
+ "context_size": 0,
6
+ "epochs": [10],
7
+ "learning_rates": [3e-5, 5e-5],
8
+ "subword_poolings": ["first"],
9
+ "hipe_datasets": ["newseye/fi", "newseye/sv"],
10
+ "cuda": "0"
11
+ }
configs/newseye/sv/hmbert.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "seeds": [1,2,3,4,5],
3
+ "batch_sizes": [4, 8],
4
+ "hf_model": "dbmdz/bert-base-historic-multilingual-cased",
5
+ "context_size": 0,
6
+ "epochs": [10],
7
+ "learning_rates": [3e-5, 5e-5],
8
+ "subword_poolings": ["first"],
9
+ "hipe_datasets": ["newseye/sv"],
10
+ "cuda": "0"
11
+ }
configs/newseye/sv/hmteams.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "seeds": [1,2,3,4,5],
3
+ "batch_sizes": [4, 8],
4
+ "hf_model": "hmteams/teams-base-historic-multilingual-discriminator",
5
+ "context_size": 0,
6
+ "epochs": [10],
7
+ "learning_rates": [3e-5, 5e-5],
8
+ "subword_poolings": ["first"],
9
+ "hipe_datasets": ["newseye/sv"],
10
+ "cuda": "0"
11
+ }
configs/topres19th/en/hmbert.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "seeds": [1,2,3,4,5],
3
+ "batch_sizes": [4, 8],
4
+ "hf_model": "dbmdz/bert-base-historic-multilingual-cased",
5
+ "context_size": 0,
6
+ "epochs": [10],
7
+ "learning_rates": [3e-5, 5e-5],
8
+ "subword_poolings": ["first"],
9
+ "hipe_datasets": ["topres19th/en"],
10
+ "cuda": "0"
11
+ }
configs/topres19th/en/hmteams.json ADDED
@@ -0,0 +1,11 @@
1
+ {
2
+ "seeds": [1,2,3,4,5],
3
+ "batch_sizes": [4, 8],
4
+ "hf_model": "hmteams/teams-base-historic-multilingual-discriminator",
5
+ "context_size": 0,
6
+ "epochs": [10],
7
+ "learning_rates": [3e-5, 5e-5],
8
+ "subword_poolings": ["first"],
9
+ "hipe_datasets": ["topres19th/en"],
10
+ "cuda": "0"
11
+ }
flair-fine-tuner.py ADDED
@@ -0,0 +1,132 @@
1
+ import json
2
+ import logging
3
+ import sys
4
+
5
+ import flair
6
+ import torch
7
+
8
+ from typing import List
9
+
10
+ from flair.data import MultiCorpus
11
+ from flair.datasets import ColumnCorpus, NER_HIPE_2022, NER_ICDAR_EUROPEANA
12
+ from flair.embeddings import (
13
+ TokenEmbeddings,
14
+ StackedEmbeddings,
15
+ TransformerWordEmbeddings
16
+ )
17
+ from flair import set_seed
18
+ from flair.models import SequenceTagger
19
+ from flair.trainers import ModelTrainer
20
+
21
+ from utils import prepare_ajmc_corpus, prepare_clef_2020_corpus, prepare_newseye_fi_sv_corpus, prepare_newseye_de_fr_corpus
22
+
23
+ logger = logging.getLogger("flair")
24
+ logger.setLevel(level="INFO")
25
+
26
+
27
+ def run_experiment(seed: int, batch_size: int, epoch: int, learning_rate: float, subword_pooling: str,
28
+ hipe_datasets: List[str], json_config: dict):
29
+ hf_model = json_config["hf_model"]
30
+ context_size = json_config["context_size"]
31
+ layers = json_config["layers"] if "layers" in json_config else "-1"
32
+ use_crf = json_config["use_crf"] if "use_crf" in json_config else False
33
+
34
+ # Set seed for reproducibility
35
+ set_seed(seed)
36
+
37
+ corpus_list = []
38
+
39
+ # Dataset-related
40
+ for dataset in hipe_datasets:
41
+ dataset_name, language = dataset.split("/")
42
+
43
+ # E.g. topres19th needs no special preprocessing
44
+ preproc_fn = None
45
+
46
+ if dataset_name == "ajmc":
47
+ preproc_fn = prepare_ajmc_corpus
48
+ elif dataset_name == "hipe2020":
49
+ preproc_fn = prepare_clef_2020_corpus
50
+ elif dataset_name == "newseye" and language in ["fi", "sv"]:
51
+ preproc_fn = prepare_newseye_fi_sv_corpus
52
+ elif dataset_name == "newseye" and language in ["de", "fr"]:
53
+ preproc_fn = prepare_newseye_de_fr_corpus
54
+
55
+ if dataset_name == "icdar":
56
+ corpus_list.append(NER_ICDAR_EUROPEANA(language=language))
57
+ else:
58
+ corpus_list.append(NER_HIPE_2022(dataset_name=dataset_name, language=language, preproc_fn=preproc_fn,
59
+ add_document_separator=True))
60
+
61
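+ # A context size of 0 disables FLERT document context (use_context is set to False below)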
+ if context_size == 0:
62
+ context_size = False
63
+
64
+ logger.info("FLERT Context: {}".format(context_size))
65
+ logger.info("Layers: {}".format(layers))
66
+ logger.info("Use CRF: {}".format(use_crf))
67
+
68
+ corpora: MultiCorpus = MultiCorpus(corpora=corpus_list, sample_missing_splits=False)
69
+ label_dictionary = corpora.make_label_dictionary(label_type="ner")
70
+ logger.info("Label Dictionary: {}".format(label_dictionary.get_items()))
71
+
72
+ embeddings = TransformerWordEmbeddings(
73
+ model=hf_model,
74
+ layers=layers,
75
+ subtoken_pooling=subword_pooling,
76
+ fine_tune=True,
77
+ use_context=context_size,
78
+ )
79
+
80
+ tagger: SequenceTagger = SequenceTagger(
81
+ hidden_size=256,
82
+ embeddings=embeddings,
83
+ tag_dictionary=label_dictionary,
84
+ tag_type="ner",
85
+ use_crf=use_crf,
86
+ use_rnn=False,
87
+ reproject_embeddings=False,
88
+ )
89
+
90
+ # Trainer
91
+ trainer: ModelTrainer = ModelTrainer(tagger, corpora)
92
+
93
+ datasets = "-".join([dataset for dataset in hipe_datasets])
94
+
95
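+ # The output folder name encodes dataset, model and all hyper-parameters, so runs can later be aggregated by globbing these folders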
+ trainer.fine_tune(
96
+ f"hmbench-{datasets}-{hf_model}-bs{batch_size}-ws{context_size}-e{epoch}-lr{learning_rate}-pooling{subword_pooling}-layers{layers}-crf{use_crf}-{seed}",
97
+ learning_rate=learning_rate,
98
+ mini_batch_size=batch_size,
99
+ max_epochs=epoch,
100
+ shuffle=True,
101
+ embeddings_storage_mode='none',
102
+ weight_decay=0.,
103
+ use_final_model_for_eval=False,
104
+ )
105
+
106
+ # Finally, print model card for information
107
+ tagger.print_model_card()
108
+
109
+
110
+ if __name__ == "__main__":
111
+ filename = sys.argv[1]
112
+ with open(filename, "rt") as f_p:
113
+ json_config = json.load(f_p)
114
+
115
+ seeds = json_config["seeds"]
116
+ batch_sizes = json_config["batch_sizes"]
117
+ epochs = json_config["epochs"]
118
+ learning_rates = json_config["learning_rates"]
119
+ subword_poolings = json_config["subword_poolings"]
120
+
121
+ hipe_datasets = json_config["hipe_datasets"] # Do not iterate over them
122
+
123
+ cuda = json_config["cuda"]
124
+ flair.device = f'cuda:{cuda}'
125
+
126
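+ # Grid search: one fine-tuning run per combination of seed, batch size, epochs, learning rate and subword pooling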
+ for seed in seeds:
127
+ for batch_size in batch_sizes:
128
+ for epoch in epochs:
129
+ for learning_rate in learning_rates:
130
+ for subword_pooling in subword_poolings:
131
+ run_experiment(seed, batch_size, epoch, learning_rate, subword_pooling, hipe_datasets,
132
+ json_config) # pylint: disable=no-value-for-parameter
flair-log-parser.py ADDED
@@ -0,0 +1,93 @@
1
+ import re
2
+ import sys
3
+ import numpy as np
4
+
5
+ from collections import defaultdict
6
+ from pathlib import Path
7
+ from tabulate import tabulate
8
+
9
+ # pattern = "bert-tiny-historic-multilingual-cased-*" # sys.argv[1]
10
+ pattern = sys.argv[1]
11
+
12
+ log_dirs = Path("./").rglob(f"{pattern}")
13
+
14
+ dev_results = defaultdict(list)
15
+ test_results = defaultdict(list)
16
+
17
+ for log_dir in log_dirs:
18
+ training_log = log_dir / "training.log"
19
+
20
+ if not training_log.exists():
+     print(f"No training.log found in {log_dir}")
+     continue
22
+
23
+ matches = re.match(r".*(bs.*?)-(ws.*?)-(e.*?)-(lr.*?)-layers-1-crfFalse-(\d+)", str(log_dir))
24
+
25
+ batch_size = matches.group(1)
26
+ ws = matches.group(2)
27
+ epochs = matches.group(3)
28
+ lr = matches.group(4)
29
+ seed = matches.group(5)
30
+
31
+ result_identifier = f"{ws}-{batch_size}-{epochs}-{lr}"
32
+
33
+ with open(training_log, "rt") as f_p:
34
+ all_dev_results = []
35
+ for line in f_p:
36
+ line = line.rstrip()
37
+
38
+ if "f1-score (micro avg)" in line:
39
+ dev_result = line.split(" ")[-1]
40
+ all_dev_results.append(dev_result)
41
+ # dev_results[result_identifier].append(dev_result)
42
+
43
+ if "F-score (micro" in line:
44
+ test_result = line.split(" ")[-1]
45
+ test_results[result_identifier].append(test_result)
46
+
47
+ best_dev_result = max([float(value) for value in all_dev_results])
48
+ dev_results[result_identifier].append(best_dev_result)
49
+
50
+ mean_dev_results = {}
51
+
52
+ print("Debug:", dev_results)
53
+
54
+ for dev_result in dev_results.items():
55
+ result_identifier, results = dev_result
56
+
57
+ mean_result = np.mean([float(value) for value in results])
58
+
59
+ mean_dev_results[result_identifier] = mean_result
60
+
61
+ print("Averaged Development Results:")
62
+
63
+ sorted_mean_dev_results = dict(sorted(mean_dev_results.items(), key=lambda item: item[1], reverse=True))
64
+
65
+ for mean_dev_config, score in sorted_mean_dev_results.items():
66
+ print(f"{mean_dev_config} : {round(score * 100, 2)}")
67
+
68
+ best_dev_configuration = max(mean_dev_results, key=mean_dev_results.get)
69
+
70
+ print("Markdown table:")
71
+
72
+ print("")
73
+
74
+ print("Best configuration:", best_dev_configuration)
75
+
76
+ print("\n")
77
+
78
+ print("Best Development Score:",
79
+ round(mean_dev_results[best_dev_configuration] * 100, 2))
80
+
81
+ print("\n")
82
+
83
+ header = ["Configuration"] + [f"Run {i + 1}" for i in range(len(dev_results[best_dev_configuration]))] + ["Avg."]
84
+
85
+ table = []
86
+
87
+ for mean_dev_config, score in sorted_mean_dev_results.items():
88
+ current_std = np.std(dev_results[mean_dev_config])
89
+ current_row = [f"`{mean_dev_config}`", *[round(res * 100, 2) for res in dev_results[mean_dev_config]],
90
+ f"{round(score * 100, 2)} ± {round(current_std * 100, 2)}"]
91
+ table.append(current_row)
92
+
93
+ print(tabulate(table, headers=header, tablefmt="github") + "\n")
requirements.txt ADDED
@@ -0,0 +1 @@
1
+ git+https://github.com/flairNLP/flair.git@419f13a05d6b36b2a42dd73a551dc3ba679f820c
script.py ADDED
@@ -0,0 +1,48 @@
1
+ # Expected environment variables:
2
+ # CONFIG: points to *.json configuration file
3
+ # HF_TOKEN: HF access token from https://huggingface.co/settings/tokens
4
+ # REPO_NAME: name of HF datasets repo
5
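+ #
+ # A hypothetical example invocation (placeholder values, not real tokens or repo names):
+ #   CONFIG=configs/ajmc/de/hmteams.json HF_TOKEN=<token> REPO_NAME=<user>/<repo> python3 script.py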
+
6
+
7
+ import os
8
+ import flair
9
+ import json
10
+ import importlib
11
+
12
+ from huggingface_hub import login, HfApi
13
+
14
+ fine_tuner = importlib.import_module("flair-fine-tuner")
15
+
16
+ config_file = os.environ.get("CONFIG")
17
+ hf_token = os.environ.get("HF_TOKEN")
18
+ repo_name = os.environ.get("REPO_NAME")
19
+
20
+ login(token=hf_token, add_to_git_credential=True)
21
+ api = HfApi()
22
+
23
+ with open(config_file, "rt") as f_p:
24
+ json_config = json.load(f_p)
25
+
26
+ seeds = json_config["seeds"]
27
+ batch_sizes = json_config["batch_sizes"]
28
+ epochs = json_config["epochs"]
29
+ learning_rates = json_config["learning_rates"]
30
+ subword_poolings = json_config["subword_poolings"]
31
+
32
+ hipe_datasets = json_config["hipe_datasets"] # Do not iterate over them
33
+
34
+ cuda = json_config["cuda"]
35
+ flair.device = f'cuda:{cuda}'
36
+
37
+ for seed in seeds:
38
+ for batch_size in batch_sizes:
39
+ for epoch in epochs:
40
+ for learning_rate in learning_rates:
41
+ for subword_pooling in subword_poolings:
42
+ fine_tuner.run_experiment(seed, batch_size, epoch, learning_rate, subword_pooling, hipe_datasets, json_config)
43
+ api.upload_folder(
44
+ folder_path="./",
45
+ path_in_repo="./", # Upload to a specific folder
46
+ repo_id=repo_name,
47
+ repo_type="dataset",
48
+ )
utils.py ADDED
@@ -0,0 +1,385 @@
1
+ from flair.data import Sentence
2
+ from flair.embeddings import TransformerWordEmbeddings
3
+
4
+ from pathlib import Path
5
+
6
+ from typing import List
7
+
8
+
9
+ def prepare_ajmc_corpus(
10
+ file_in: Path, file_out: Path, eos_marker: str, document_separator: str, add_document_separator: bool
11
+ ):
12
+ with open(file_in, "rt") as f_p:
13
+ lines = f_p.readlines()
14
+
15
+ with open(file_out, "wt") as f_out:
16
+ # Add missing newline after header
17
+ f_out.write(lines[0] + "\n")
18
+
19
+ for line in lines[1:]:
20
+ if line.startswith(" \t"):
21
+ # Workaround for empty tokens
22
+ continue
23
+
24
+ line = line.strip()
25
+
26
+ # HIPE-2022 late pre-submission fix:
27
+ # Our hmBERT model has never seen Fraktur, so we replace long s
28
+ line = line.replace("ſ", "s")
29
+
30
+ # Add "real" document marker
31
+ if add_document_separator and line.startswith(document_separator):
32
+ f_out.write("-DOCSTART- O\n\n")
33
+
34
+ f_out.write(line + "\n")
35
+
36
+ if eos_marker in line:
37
+ f_out.write("\n")
38
+
39
+ print("Special preprocessing for AJMC has finished!")
40
+
41
+
42
+ def prepare_clef_2020_corpus(
43
+ file_in: Path, file_out: Path, eos_marker: str, document_separator: str, add_document_separator: bool
44
+ ):
45
+ with open(file_in, "rt") as f_p:
46
+ original_lines = f_p.readlines()
47
+
48
+ lines = []
49
+
50
+ # Add missing newline after header
51
+ lines.append(original_lines[0])
52
+
53
+ for line in original_lines[1:]:
54
+ if line.startswith(" \t"):
55
+ # Workaround for empty tokens
56
+ continue
57
+
58
+ line = line.strip()
59
+
60
+ # Add "real" document marker
61
+ if add_document_separator and line.startswith(document_separator):
62
+ lines.append("-DOCSTART- O")
63
+ lines.append("")
64
+
65
+ lines.append(line)
66
+
67
+ if eos_marker in line:
68
+ lines.append("")
69
+
70
+ # Now here comes the de-hyphenation part ;)
71
+ word_seperator = "¬"
72
+
73
+ for index, line in enumerate(lines):
74
+ if line.startswith("#"):
75
+ continue
76
+
77
+ if line.startswith(word_seperator):
78
+ continue
79
+
80
+ if not line:
81
+ continue
82
+
83
+ prev_line = lines[index - 1]
84
+
85
+ prev_prev_line = lines[index - 2]
86
+
87
+ if not prev_line.startswith(word_seperator):
88
+ continue
89
+
90
+ # Example:
91
+ # Po <- prev_prev_line
92
+ # ¬ <- prev_line
93
+ # len <- current_line
94
+ #
95
+ # will be de-hyphenated to:
96
+ #
97
+ # Polen Dehyphenated-3
98
+ # # ¬
99
+ # # len
100
+ suffix = line.split("\t")[0]
101
+
102
+ prev_prev_line_splitted = lines[index - 2].split("\t")
103
+ prev_prev_line_splitted[0] += suffix
104
+
105
+ prev_line_splitted = lines[index - 1].split("\t")
106
+ prev_line_splitted[0] = "#" + prev_line_splitted[0]
107
+ prev_line_splitted[-1] += "|Commented"
108
+
109
+ current_line_splitted = line.split("\t")
110
+ current_line_splitted[0] = "#" + current_line_splitted[0]
111
+ current_line_splitted[-1] += "|Commented"
112
+
113
+ # Add some meta information about suffix length
114
+ # Later, it is possible to re-construct original token and suffix
115
+ prev_prev_line_splitted[9] += f"|Dehyphenated-{len(suffix)}"
116
+
117
+ lines[index - 2] = "\t".join(prev_prev_line_splitted)
118
+ lines[index - 1] = "\t".join(prev_line_splitted)
119
+ lines[index] = "\t".join(current_line_splitted)
120
+
121
+ # Post-Processing I
122
+ for index, line in enumerate(lines):
123
+ if not line:
124
+ continue
125
+
126
+ if not line.startswith(word_seperator):
127
+ continue
128
+
129
+ # oh noooo
130
+ current_line_splitted = line.split("\t")
131
+ current_line_splitted[0] = "#" + current_line_splitted[0]
132
+
133
+ current_line_splitted[-1] += "|Commented"
134
+
135
+ lines[index] = "\t".join(current_line_splitted)
136
+
137
+ # Post-Processing II
138
+ # Beautify: _|Commented –> Commented
139
+ for index, line in enumerate(lines):
140
+ if not line:
141
+ continue
142
+
143
+ if not line.startswith("#"):
144
+ continue
145
+
146
+ current_line_splitted = line.split("\t")
147
+
148
+ if current_line_splitted[-1] == "_|Commented":
149
+ current_line_splitted[-1] = "Commented"
150
+ lines[index] = "\t".join(current_line_splitted)
151
+
152
+ # Finally, save it!
153
+ with open(file_out, "wt") as f_out:
154
+ for line in lines:
155
+ f_out.write(line + "\n")
156
+
157
+
158
+ def prepare_newseye_fi_sv_corpus(
159
+ file_in: Path, file_out: Path, eos_marker: str, document_separator: str, add_document_separator: bool
160
+ ):
161
+ with open(file_in, "rt") as f_p:
162
+ original_lines = f_p.readlines()
163
+
164
+ lines = []
165
+
166
+ # Add missing newline after header
167
+ lines.append(original_lines[0])
168
+
169
+ for line in original_lines[1:]:
170
+ if line.startswith(" \t"):
171
+ # Workaround for empty tokens
172
+ continue
173
+
174
+ line = line.strip()
175
+
176
+ # Add "real" document marker
177
+ if add_document_separator and line.startswith(document_separator):
178
+ lines.append("-DOCSTART- O")
179
+ lines.append("")
180
+
181
+ lines.append(line)
182
+
183
+ if eos_marker in line:
184
+ lines.append("")
185
+
186
+ # Now here comes the de-hyphenation part
187
+ # And we want to avoid matching "-DOCSTART-" lines here, so append a tab
188
+ word_seperator = "-\t"
189
+
190
+ for index, line in enumerate(lines):
191
+ if line.startswith("#"):
192
+ continue
193
+
194
+ if line.startswith(word_seperator):
195
+ continue
196
+
197
+ if not line:
198
+ continue
199
+
200
+ prev_line = lines[index - 1]
201
+
202
+ prev_prev_line = lines[index - 2]
203
+
204
+ if not prev_line.startswith(word_seperator):
205
+ continue
206
+
207
+ # Example:
208
+ # Po NoSpaceAfter <- prev_prev_line
209
+ # - <- prev_line
210
+ # len <- current_line
211
+ #
212
+ # will be de-hyphenated to:
213
+ #
214
+ # Polen Dehyphenated-3
215
+ # # -
216
+ # # len
217
+ #
218
+ # It is really important, that "NoSpaceAfter" in the previous
219
+ # line before hyphenation character! Otherwise, it is no real
220
+ # hyphenation!
221
+
222
+ if not "NoSpaceAfter" in prev_line:
223
+ continue
224
+
225
+ if not prev_prev_line:
226
+ continue
227
+
228
+ suffix = line.split("\t")[0]
229
+
230
+ prev_prev_line_splitted = lines[index - 2].split("\t")
231
+ prev_prev_line_splitted[0] += suffix
232
+
233
+ prev_line_splitted = lines[index - 1].split("\t")
234
+ prev_line_splitted[0] = "# " + prev_line_splitted[0]
235
+ prev_line_splitted[-1] += "|Commented"
236
+
237
+ current_line_splitted = line.split("\t")
238
+ current_line_splitted[0] = "# " + current_line_splitted[0]
239
+ current_line_splitted[-1] += "|Commented"
240
+
241
+ # Add some meta information about suffix length
242
+ # Later, it is possible to re-construct original token and suffix
243
+ prev_prev_line_splitted[9] += f"|Dehyphenated-{len(suffix)}"
244
+
245
+ lines[index - 2] = "\t".join(prev_prev_line_splitted)
246
+ lines[index - 1] = "\t".join(prev_line_splitted)
247
+ lines[index] = "\t".join(current_line_splitted)
248
+
249
+ # Post-Processing I
250
+ for index, line in enumerate(lines):
251
+ if not line:
252
+ continue
253
+
254
+ if not line.startswith(word_seperator):
255
+ continue
256
+
257
+ # oh noooo
258
+ current_line_splitted = line.split("\t")
259
+ current_line_splitted[0] = "# " + current_line_splitted[0]
260
+
261
+ current_line_splitted[-1] += "|Commented"
262
+
263
+ lines[index] = "\t".join(current_line_splitted)
264
+
265
+ # Post-Processing II
266
+ # Beautify: _|Commented –> Commented
267
+ for index, line in enumerate(lines):
268
+ if not line:
269
+ continue
270
+
271
+ if not line.startswith("#"):
272
+ continue
273
+
274
+ current_line_splitted = line.split("\t")
275
+
276
+ if current_line_splitted[-1] == "_|Commented":
277
+ current_line_splitted[-1] = "Commented"
278
+ lines[index] = "\t".join(current_line_splitted)
279
+
280
+ # Finally, save it!
281
+ with open(file_out, "wt") as f_out:
282
+ for line in lines:
283
+ f_out.write(line + "\n")
284
+
285
+
286
+ def prepare_newseye_de_fr_corpus(
287
+ file_in: Path, file_out: Path, eos_marker: str, document_separator: str, add_document_separator: bool
288
+ ):
289
+ with open(file_in, "rt") as f_p:
290
+ original_lines = f_p.readlines()
291
+
292
+ lines = []
293
+
294
+ # Add missing newline after header
295
+ lines.append(original_lines[0])
296
+
297
+ for line in original_lines[1:]:
298
+ if line.startswith(" \t"):
299
+ # Workaround for empty tokens
300
+ continue
301
+
302
+ line = line.strip()
303
+
304
+ # Add "real" document marker
305
+ if add_document_separator and line.startswith(document_separator):
306
+ lines.append("-DOCSTART- O")
307
+ lines.append("")
308
+
309
+ lines.append(line)
310
+
311
+ if eos_marker in line:
312
+ lines.append("")
313
+
314
+ # Now here comes the de-hyphenation part ;)
315
+ word_seperator = "¬"
316
+
317
+ for index, line in enumerate(lines):
318
+ if line.startswith("#"):
319
+ continue
320
+
321
+ if not line:
322
+ continue
323
+
324
+ last_line = lines[index - 1]
325
+ last_line_splitted = last_line.split("\t")
326
+
327
+ if not last_line_splitted[0].endswith(word_seperator):
328
+ continue
329
+
330
+ # The following example
331
+ #
332
+ # den O O O null null SpaceAfter
333
+ # Ver¬ B-LOC O O null n <- last_line
334
+ # einigten I-LOC O O null n SpaceAfter <- current_line
335
+ # Staaten I-LOC O O null n
336
+ # . O O O null null
337
+ #
338
+ # will be transformed to:
339
+ #
340
+ # den O O O null null SpaceAfter
341
+ # Vereinigten B-LOC O O null n |Normalized-8
342
+ # #einigten I-LOC O O null n SpaceAfter|Commented
343
+ # Staaten I-LOC O O null n
344
+ # . O O O null null
345
+
346
+ suffix = last_line.split("\t")[0].replace(word_seperator, "") # Will be "Ver"
347
+
348
+ prefix_length = len(line.split("\t")[0])
349
+
350
+ # Override last_line:
351
+ # Ver¬ will be transformed to Vereinigten with normalized information at the end
352
+
353
+ last_line_splitted[0] = suffix + line.split("\t")[0]
354
+
355
+ last_line_splitted[9] += f"|Dehyphenated-{prefix_length}"
356
+
357
+ current_line_splitted = line.split("\t")
358
+ current_line_splitted[0] = "# " + current_line_splitted[0]
359
+ current_line_splitted[-1] += "|Commented"
360
+
361
+ lines[index - 1] = "\t".join(last_line_splitted)
362
+ lines[index] = "\t".join(current_line_splitted)
363
+
364
+ # Post-Processing I
365
+ # Beautify: _|Commented –> Commented
366
+ for index, line in enumerate(lines):
367
+ if not line:
368
+ continue
369
+
370
+ if not line.startswith("#"):
371
+ continue
372
+
373
+ current_line_splitted = line.split("\t")
374
+
375
+ if current_line_splitted[-1] == "_|Commented":
376
+ current_line_splitted[-1] = "Commented"
377
+ lines[index] = "\t".join(current_line_splitted)
378
+
379
+ # Finally, save it!
380
+ with open(file_out, "wt") as f_out:
381
+ for line in lines:
382
+ f_out.write(line + "\n")
383
+
384
+ print("Special preprocessing for German/French NewsEye dataset has finished!")
385
+