---
license: mit
language:
- nl
---

# hmByT5 - Preliminary Language Models

Preliminary Historic Multilingual and Monolingual ByT5 Models. The following languages are currently covered:

* Dutch (Delpher Corpus)

More details can be found in [our GitHub repository](https://github.com/stefan-it/hmByT5).
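
A minimal sketch of how a checkpoint from this repository could be loaded with Hugging Face Transformers. The model id below is a placeholder, not the actual repository name:

```python
# Minimal loading sketch. The checkpoint id is a hypothetical placeholder;
# substitute the id of this model repository.
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "stefan-it/hmbyt5-preliminary"  # placeholder id

tokenizer = AutoTokenizer.from_pretrained(model_id)  # byte-level ByT5 tokenizer
model = T5ForConditionalGeneration.from_pretrained(model_id)

# ByT5 operates directly on UTF-8 bytes, so no language-specific tokenization is needed.
inputs = tokenizer("Een voorbeeldzin in het Nederlands.", return_tensors="pt")
```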

# Pretraining

We use the official JAX/FLAX example in Hugging Face Transformers to pretrain a ByT5 model on a single v3-8 TPU.
Details about the training can be found [here](https://github.com/stefan-it/hmByT5/tree/main/hmbyt5-flax).
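
The core of that example, sketched in a few lines (the names and the starting checkpoint are illustrative, not a description of the exact training setup; see the linked repository for the actual script and hyper-parameters):

```python
# Illustrative sketch only: instantiate a ByT5 model in Flax, as the JAX/FLAX
# language-modeling example does, before training it with the T5 span-corruption
# objective on the historic corpora.
from transformers import AutoTokenizer, FlaxT5ForConditionalGeneration

# ByT5 ships a fixed byte-level tokenizer, so no vocabulary training is required.
tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")
model = FlaxT5ForConditionalGeneration.from_pretrained("google/byt5-small")
```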

# Evaluation on Downstream Tasks (NER)

We evaluated the hmByT5 model on the ICDAR Europeana dataset. Configuration names encode the fine-tuning hyper-parameters (batch size `bs`, number of epochs `e`, learning rate `lr` and pooling strategy):

| Configuration                            | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg.         |
|------------------------------------------|-------|-------|-------|-------|-------|--------------|
| `wsFalse-bs4-e10-lr0.00015-poolingfirst` | 88.02 | 88.71 | 87.17 | 87.00 | 88.62 | 87.90 ± 0.71 |
| `wsFalse-bs8-e10-lr0.00015-poolingfirst` | 87.10 | 86.72 | 87.15 | 88.29 | 87.35 | 87.32 ± 0.53 |
| `wsFalse-bs8-e10-lr0.00016-poolingfirst` | 87.23 | 87.19 | 87.11 | 87.62 | 87.11 | 87.25 ± 0.19 |
| `wsFalse-bs4-e10-lr0.00016-poolingfirst` | 85.98 | 87.50 | 84.22 | 87.08 | 86.48 | 86.25 ± 1.14 |
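
The `Avg.` column appears to be the mean ± (population) standard deviation over the five runs; a small sketch to reproduce it for the first row, using the values from the table above:

```python
# Reproduce the "Avg." cell of the first table row.
# Assumption: the reported spread is the population standard deviation.
from statistics import mean, pstdev

runs = [88.02, 88.71, 87.17, 87.00, 88.62]  # wsFalse-bs4-e10-lr0.00015-poolingfirst
print(f"{mean(runs):.2f} ± {pstdev(runs):.2f}")  # prints: 87.90 ± 0.71
```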

# Acknowledgements

Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many thanks for providing access to the TPUs ❤️