zouharvi committed on
Commit 78a7f2f
1 Parent(s): da3c19d

Update README.md

Files changed (1):
  1. README.md +7 -34
README.md CHANGED
@@ -17,7 +17,8 @@ size_categories:
 - 1K<n<10K
 ---
 
-This is a repository for two papers: **Quality and Quantity of Machine Translation References for Automated Metrics [[paper](https://arxiv.org/abs/2401.01283)]** - effect of reference quality and quantity on automatic metric performance, and **Evaluating Optimal Reference Translations [[paper]](https://arxiv.org/abs/2311.16787)** - creation of the data and human aspects of annotation and translation.
+This is the dataset for two papers: **Quality and Quantity of Machine Translation References for Automated Metrics [[paper](https://arxiv.org/abs/2401.01283)]** - effect of reference quality and quantity on automatic metric performance, and **Evaluating Optimal Reference Translations [[paper]](https://arxiv.org/abs/2311.16787)** - creation of the data and human aspects of annotation and translation.
+Please see the [original repository](https://github.com/ufal/optimal-reference-translations) for more information.
 
 # Quality and Quantity of Machine Translation References for Automated Metrics [[paper](https://arxiv.org/abs/2401.01283)]
 
@@ -35,19 +36,6 @@ Cite [this paper](https://arxiv.org/abs/2401.01283) as:
 }
 ```
 
-## Results
-
-Higher-quality translations lead to better segment-level correlations. Very high-quality translations (R4, produced by translatologists) contain translation shifts and are therefore not the best references.
-Using up to 7 references per segment helps.
-
-<img src="https://github.com/ufal/optimal-reference-translations/assets/7661193/d4cf2669-b2d8-40a3-9193-b1e8811090f2" width="48%">
-<img src="https://github.com/ufal/optimal-reference-translations/assets/7661193/c660daaa-ffd2-4229-8084-309e4db2b89f" width="48%">
-
-A heuristic-based algorithm can select which references to invest in. It is controlled by a hyperparameter that balances quality against quantity.
-
-<img src="https://github.com/ufal/optimal-reference-translations/assets/7661193/53e27e2e-57b6-4aa8-ae52-74f6adc649de" width="48%">
-<img src="https://github.com/ufal/optimal-reference-translations/assets/7661193/d5579fea-946c-4056-b4d6-ccdb8cefa3cb" width="48%">
-
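For intuition, a minimal sketch of what such a budget-constrained selection heuristic could look like. The function name, the `(cost, quality)` representation, and the scoring rule are illustrative assumptions, not the algorithm from the paper:

```python3
# Illustrative sketch only, not the paper's algorithm: greedily spend a
# fixed annotation budget on candidate references, scoring each one by a
# weighted mix of its quality and its cost. The hyperparameter lam plays
# the quality-vs-quantity role described above.
def select_references(candidates, budget, lam=0.5):
    """candidates: (cost, quality) pairs; lam in [0, 1], higher favors quality."""
    def score(c):
        cost, quality = c
        return lam * quality - (1 - lam) * cost
    selected = []
    for cost, quality in sorted(candidates, key=score, reverse=True):
        if cost <= budget:
            selected.append((cost, quality))
            budget -= cost
    return selected

# With lam=0.9 the single expensive high-quality reference wins;
# with lam=0.1 the three cheap ones are chosen instead.
print(select_references([(4, 1.0), (1, 0.6), (1, 0.5), (1, 0.4)], budget=4, lam=0.9))
print(select_references([(4, 1.0), (1, 0.6), (1, 0.5), (1, 0.4)], budget=4, lam=0.1))
```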
 # Evaluating Optimal Reference Translations [[paper]](https://arxiv.org/abs/2311.16787)
 
 > **Abstract:** The overall translation quality reached by current machine translation (MT) systems for high-resourced language pairs is remarkably good. Standard methods of evaluation are not suitable nor intended to uncover the many translation errors and quality deficiencies that still persist. Furthermore, the quality of standard reference translations is commonly questioned and comparable quality levels have been reached by MT alone in several language pairs. Navigating further research in these high-resource settings is thus difficult. In this article, we propose a methodology for creating more reliable document-level human reference translations, called "optimal reference translations," with the simple aim to raise the bar of what should be deemed "human translation quality." We evaluate the obtained document-level optimal reference translations in comparison with "standard" ones, confirming a significant quality increase and also documenting the relationship between evaluation and translation editing.
@@ -68,22 +56,16 @@ For now cite as:
 Collected human evaluation data for English-to-Czech translation are in [`data/ort_human.json`](data/ort_human.json). The rest of this repository contains the data preparation and evaluation code.
 Our data is based on WMT2020 data and can thus also be used to, e.g., evaluate the quality of various translations as references.
 The data was created as follows:
-1. P1, P2, and P3 are independent translations from English to Czech. N1 is an expert translation by a translatologist.
+1. R1, R2, and R3 are independent translations from English to Czech. R4 is an expert translation by a translatologist.
 2. All the human translations are evaluated in detail at the document and segment level (in [`data/ort_human.json`](data/ort_human.json)) by different types of human annotators (laypeople, translatology students, professional translators). If a translation is not perfect, the annotators provide a post-edited version to which they would assign the highest grade (6).
 
 Note: If you also want to use the WMT2020 system submissions, please contact [Vilém Zouhar](vilem.zouhar@gmail.com). The code is here, just not pretty yet. 🙂
 
 ## Example usage
 
-```bash
-# fetch data
-curl "https://raw.githubusercontent.com/ufal/optimal-reference-translations/main/data/ort_human.json" > ort_human.json
-```
-
 ```python3
-# in Python
-import json
-data = json.load(open("ort_human.json"))
+from datasets import load_dataset
+data = load_dataset("zouharvi/optimal-reference-translations", 'ort_human')["train"]
 
 # 220 annotated documents
 len(data)
@@ -98,22 +80,13 @@ sum([sum([len(line["translations"]) for line in doc["lines"]]) for doc in data])
 len(set(doc["uid"] for doc in data))
 
 import numpy as np
-# Average document-level rating for N1: 5.865
+# Average document-level rating for R4: 5.865
 np.average([doc["rating"]["4"]["overall"] for doc in data])
 
-# Average document-level rating for P3: 4.810
+# Average document-level rating for R3: 4.810
 np.average([doc["rating"]["3"]["overall"] for doc in data])
 ```
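As a follow-up to the example above, a small sketch that averages the document-level ratings for all four translations. It assumes the rating keys "1" through "4" correspond to R1–R4; only "3" and "4" are confirmed by the example above:

```python3
# Assumption: ratings are keyed "1".."4" for R1..R4 (only "3" and "4"
# appear in the example above, so "1" and "2" are an extrapolation).
for ref_key in ["1", "2", "3", "4"]:
    avg = np.average([doc["rating"][ref_key]["overall"] for doc in data])
    print(f"R{ref_key}: {avg:.3f}")
```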
 
-## Results
-
-It makes sense to have multiple rounds of translation post-editing.
-![image](https://github.com/ufal/optimal-reference-translations/assets/7661193/d20d1e2e-4d08-4457-b654-961917d7b0e9)
-
-Translatology students, professionals, and laypeople perceive quality differently.
-![image](https://github.com/ufal/optimal-reference-translations/assets/7661193/190f519d-6851-4186-aac6-7fe53b59ba7f)
-
-
 ## Data structure
 
 Beginning of `ort_wmt` (human evaluation of multiple WMT systems):
 