update model card
README.md
We use the BLEU score for evaluation on the Flores test set: [Flores-101](https:
### Evaluation results

Below are the evaluation results for machine translation from Catalan to Italian, compared with [Softcatalà](https://www.softcatala.org/) and [Google Translate](https://translate.google.es/?hl=es):

| Test set           | SoftCatalà | Google Translate | mt-aina-ca-it |
|--------------------|------------|------------------|---------------|
| Flores 101 dev     | 24.3       | **28.5**         | 26.1          |
| Flores 101 devtest | 24.7       | **29.1**         | 26.3          |