Update README.md
README.md
Training code was from Google's Jax/Flax based [t5x framework](https://githu
Evaluation was done by fine-tuning the model on a downstream text classification task with two different labeled Finnish datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Classification fine-tuning was done with a sequence length of 128 tokens.

When fine-tuned on those datasets, this model (the seventh row of the table) achieves the following accuracy results compared to our other T5 models and their parameter counts:

| Model                             | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-----------------------------------|------------------|-------------------|--------------------|
| Finnish-NLP/t5-tiny-nl6-finnish   | 31 million       | 92.80             | 69.07              |
| Finnish-NLP/t5-mini-nl8-finnish   | 72 million       | 93.89             | 71.43              |
| Finnish-NLP/t5-small-nl16-finnish | 184 million      | 94.46             | 74.00              |
| Finnish-NLP/t5-small-nl24-finnish | 260 million      | **94.68**         | 74.90              |
| Finnish-NLP/byt5-base-finnish     | 582 million      | 92.33             | 73.13              |
| Finnish-NLP/t5-base-nl36-finnish  | 814 million      | 94.40             | **75.97**          |
| Finnish-NLP/t5-large-nl36-finnish | 1425 million     | 94.17             | 73.50              |
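The classification fine-tuning described above could be set up along the following lines with the Hugging Face `transformers` library, treating classification in T5's text-to-text style (the model generates the label as a short text string). This is only a sketch under assumptions: the dataset fields (`text`, `label_text`) and the training details are illustrative placeholders, not the exact configuration used for the results in the table.

```python
# Sketch of T5 classification fine-tuning in the text-to-text style.
# NOTE: dataset loading, field names, and hyperparameters are placeholders.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MAX_LENGTH = 128  # sequence length used in the fine-tuning described above


def preprocess(examples, tokenizer):
    """Tokenize inputs to at most MAX_LENGTH tokens; encode labels as text."""
    model_inputs = tokenizer(
        examples["text"], max_length=MAX_LENGTH, truncation=True
    )
    # The class label (e.g. a Yle News topic name) becomes the target text.
    labels = tokenizer(examples["label_text"], max_length=8, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs


def main():
    tokenizer = AutoTokenizer.from_pretrained("Finnish-NLP/t5-large-nl36-finnish")
    model = AutoModelForSeq2SeqLM.from_pretrained("Finnish-NLP/t5-large-nl36-finnish")
    # Load the Yle News / Eduskunta data, map `preprocess` over it, and train
    # with Seq2SeqTrainer; omitted here for brevity.


if __name__ == "__main__":
    main()
```

At evaluation time, accuracy is then simply the fraction of examples for which the generated label string matches the gold label.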

Fine-tuning Google's multilingual mT5 models on the same datasets, we can clearly see that our monolingual Finnish T5 models achieve much better results on Finnish text classification: