RVN committed on
Commit 84ab50f
Parent(s): 90ef31b

Update README.md

Files changed (1)
  1. README.md +12 -12
README.md CHANGED
@@ -30,18 +30,18 @@ For training, we used all Maltese data that was present in the [MaCoCu](https://

 # Benchmark performance

- We tested the performance of MaltBERTa on the UPOS and XPOS benchmarks of the [Universal Dependencies](https://universaldependencies.org/) project. We compare performance to the strong multilingual models XLM-R-base and XLM-R-large, though note that Maltese was not one of the training languages for those models. We also compare to the recently introduced Maltese language models [BERTu](https://huggingface.co/MLRS/BERTu) and [mBERTu](https://huggingface.co/MLRS/mBERTu). For details regarding the fine-tuning procedure you can check out our [GitHub](https://github.com/macocu/LanguageModels).
-
- Scores are averages of three runs. We use the same hyperparameter settings for all models.
-
- |                 | **UPOS** | **UPOS** | **XPOS** | **XPOS** |
- |-----------------|:--------:|:--------:|:--------:|:--------:|
- |                 | **Dev**  | **Test** | **Dev**  | **Test** |
- | **XLM-R-base**  | 93.6     | 93.2     | 93.4     | 93.2     |
- | **XLM-R-large** | 94.9     | 94.4     | 95.1     | 94.7     |
- | **BERTu**       | 97.5     | 97.6     | 95.7     | 95.8     |
- | **mBERTu**      | 97.7     | 97.8     | 97.9     | 98.1     |
- | **MaltBERTa**   | 95.7     | 95.8     | 96.1     | 96.0     |
+ We tested the performance of MaltBERTa on the UPOS and XPOS benchmarks of the [Universal Dependencies](https://universaldependencies.org/) project. Moreover, we test on a Google-translated version of the COPA data set (see our [GitHub repo](https://github.com/RikVN/COPA) for details). We compare performance to the strong multilingual models XLM-R-base and XLM-R-large, though note that Maltese was not one of the training languages for those models. We also compare to the recently introduced Maltese language models [BERTu](https://huggingface.co/MLRS/BERTu), [mBERTu](https://huggingface.co/MLRS/mBERTu) and our own [MaltBERTa](https://huggingface.co/RVN/MaltBERTa). For details regarding the fine-tuning procedure you can check out our [GitHub](https://github.com/macocu/LanguageModels).
+
+ Scores are averages of three runs for UPOS/XPOS and ten runs for COPA. We use the same hyperparameter settings for all models for UPOS/XPOS, while for COPA we optimize on the dev set.
+
+ |                 | **UPOS** | **UPOS** | **XPOS** | **XPOS** | **COPA** |
+ |-----------------|:--------:|:--------:|:--------:|:--------:|:--------:|
+ |                 | **Dev**  | **Test** | **Dev**  | **Test** | **Test** |
+ | **XLM-R-base**  | 93.6     | 93.2     | 93.4     | 93.2     | 52.2     |
+ | **XLM-R-large** | 94.9     | 94.4     | 95.1     | 94.7     | 54.0     |
+ | **BERTu**       | 97.5     | 97.6     | 95.7     | 95.8     | **55.6** |
+ | **mBERTu**      | **97.7** | 97.8     | 97.9     | 98.1     | 52.6     |
+ | **MaltBERTa**   | 95.7     | 95.8     | 96.1     | 96.0     | 53.7     |

 # Acknowledgements

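The UPOS/XPOS numbers in the diff above come from standard token-classification fine-tuning. As an illustration only (not the exact recipe in the macocu/LanguageModels repo), a minimal sketch with the Hugging Face `transformers` Trainer might look as follows; the `universal_dependencies`/`mt_mudt` dataset identifiers and all hyperparameters are assumptions, not taken from this commit.

```python
# Illustrative sketch only: UPOS tagging fine-tuning with the Hugging Face Trainer.
# Assumed (not from the commit): dataset id/config "universal_dependencies"/"mt_mudt"
# for the Maltese UD treebank, and the hyperparameters below.
from datasets import load_dataset
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

model_name = "RVN/MaltBERTa"  # or "MLRS/BERTu", "MLRS/mBERTu", "xlm-roberta-base", ...
ud = load_dataset("universal_dependencies", "mt_mudt")
upos_names = ud["train"].features["upos"].feature.names

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=len(upos_names))

def tokenize_and_align(batch):
    # Tokenize pre-split words; keep each word's UPOS label on its first subword
    # and mask the remaining subwords with -100 so the loss ignores them.
    enc = tokenizer(batch["tokens"], is_split_into_words=True, truncation=True)
    enc["labels"] = []
    for i, tags in enumerate(batch["upos"]):
        prev, labels = None, []
        for word_id in enc.word_ids(batch_index=i):
            labels.append(tags[word_id] if word_id is not None and word_id != prev else -100)
            prev = word_id
        enc["labels"].append(labels)
    return enc

tokenized = ud.map(tokenize_and_align, batched=True, remove_columns=ud["train"].column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="upos-finetune", learning_rate=5e-5,
                           num_train_epochs=3, per_device_train_batch_size=16),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()  # add a compute_metrics function to report tagging accuracy
```

The COPA column would instead require a multiple-choice head and the translated data from the RikVN/COPA repository linked above.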