Update README.md
README.md
---

# Overview

![Scoris logo](https://scoris.lt/logo_smaller.png)

This is an English-Lithuanian translation model.

Original model: [Helsinki-NLP/opus-mt-tc-big-en-lt](https://huggingface.co/Helsinki-NLP/opus-mt-tc-big-en-lt)

Fine-tuned on a large merged data set: [scoris/en-lt-merged-data](https://huggingface.co/datasets/scoris/en-lt-merged-data) (5.4 million sentence pairs).

For Lithuanian-English translation, check the companion model [scoris-mt-lt-en](https://huggingface.co/scoris/scoris-mt-lt-en).

Trained for 6 epochs.

Made by the [Scoris](https://scoris.lt) team.
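
A minimal usage sketch with the Hugging Face Transformers library is shown below. It assumes the model keeps the standard seq2seq interface of its Marian-based parent; the example sentence is arbitrary.

```python
# Minimal sketch: load the model with Transformers and translate one sentence.
# Assumes the standard seq2seq (Marian) interface inherited from the base model.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "scoris/scoris-mt-en-lt"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "Machine translation keeps getting better."  # arbitrary example input
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```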

# Evaluation:

Tested on the scoris/en-lt-merged-data validation set.

| EN-LT                             | BLEU | ROUGE | chrF/chrF++ |
|-----------------------------------|------|-------|-------------|
| scoris/scoris-mt-en-lt            | 41.9 | 0.54  | 0.63        |
| Helsinki-NLP/opus-mt-tc-big-en-lt | 34.3 | 0.50  | 0.60        |
| Google Translate                  | 27.1 | 0.48  | 0.60        |
| DeepL                             | 28.3 | 0.50  | 0.61        |

_Google and DeepL were evaluated on a random sample of 1,000 sentence pairs._
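
As an illustration of how such scores can be computed, here is a sketch using the Hugging Face `evaluate` library; whether this exact tooling produced the table above is an assumption, and the sentence pair below is made up, not validation data.

```python
# Sketch: scoring translations with BLEU (sacrebleu) and chrF via `evaluate`.
# The prediction/reference pair is hypothetical illustration data.
import evaluate

bleu = evaluate.load("sacrebleu")
chrf = evaluate.load("chrf")

predictions = ["Labas rytas, pasauli."]    # model outputs (hypothetical)
references = [["Labas rytas, pasauli!"]]   # one reference list per prediction

print(bleu.compute(predictions=predictions, references=references)["score"])
print(chrf.compute(predictions=predictions, references=references)["score"])
```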

According to [Google](https://cloud.google.com/translate/automl/docs/evaluate), BLEU scores can be interpreted as follows: