readme: fix link reference for ByT5 embedding implementation
README.md
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `s…`

# ⚠️ Inference Widget ⚠️

Fine-tuning ByT5 models in Flair currently requires implementing a custom [`ByT5Embedding`][0] class.

This class needs to be present when running the model with Flair.

Thus, the inference widget does not currently work with hmByT5 on the Model Hub.

This should be fixed in the future, once ByT5 fine-tuning is supported directly in Flair.

[0]: https://github.com/stefan-it/hmBench/blob/main/byt5_embeddings.py
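Why the class must be importable: Flair checkpoints are saved with `torch.save`, which pickles objects by reference (module path plus class name), so unpickling fails if the class cannot be imported. A minimal, self-contained sketch of that failure mode using plain `pickle` (the `byt5_embeddings` module here is a stand-in registered in `sys.modules`, not the real implementation):

```python
import pickle
import sys
import types

# Stand-in for the ByT5Embedding class from byt5_embeddings.py.
fake_module = types.ModuleType("byt5_embeddings")

class ByT5Embedding:  # placeholder, not the real implementation
    pass

# Pretend the class lives in an importable module named "byt5_embeddings".
ByT5Embedding.__module__ = "byt5_embeddings"
fake_module.ByT5Embedding = ByT5Embedding
sys.modules["byt5_embeddings"] = fake_module

blob = pickle.dumps(ByT5Embedding())

del sys.modules["byt5_embeddings"]  # simulate loading without the class present
try:
    pickle.loads(blob)
    loaded_without_class = True
except ModuleNotFoundError:
    loaded_without_class = False  # unpickling fails: module not importable

sys.modules["byt5_embeddings"] = fake_module  # make the class available again
restored = pickle.loads(blob)  # now succeeds

print(loaded_without_class, type(restored).__name__)
```

In practice this means `byt5_embeddings.py` from the repository linked above should sit next to your script (or on `PYTHONPATH`), so the class can be imported before loading the model with Flair's `SequenceTagger.load(...)`.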
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:

And report micro F1-score on the development set:

| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Avg. |
|:--------------|:------:|:------:|:------:|:------:|:------:|:----:|
| bs8-e10-lr0.00016 | [0.8602][11] | [0.8684][12] | [0.8643][13] | [0.8643][14] | [0.8623][15] | 86.39 ± 0.27 |
| bs8-e10-lr0.00015 | [0.8551][16] | [0.8707][17] | [0.8599][18] | [0.8609][19] | [0.8612][20] | 86.16 ± 0.51 |
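The reported averages can be reproduced from the per-seed scores in the table; the ± value matches the population standard deviation. A small sketch:

```python
from statistics import fmean, pstdev

# Per-seed micro F1-scores (development set) for the two bs8-e10 configurations.
scores = {
    "bs8-e10-lr0.00016": [0.8602, 0.8684, 0.8643, 0.8643, 0.8623],
    "bs8-e10-lr0.00015": [0.8551, 0.8707, 0.8599, 0.8609, 0.8612],
}

for config, f1 in scores.items():
    # Averages are reported in percent; the spread is the population std dev.
    avg = 100 * fmean(f1)
    std = 100 * pstdev(f1)
    print(f"{config}: {avg:.2f} ± {std:.2f}")
# prints:
# bs8-e10-lr0.00016: 86.39 ± 0.27
# bs8-e10-lr0.00015: 86.16 ± 0.51
```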
[1]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs4-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00016-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-de-hmbyt5-bs8-wsFalse-e10-lr0.00015-poolingfirst-layers-1-crfFalse-5

The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).