zouharvi committed
Commit
1d779e4
1 Parent(s): 4efbdbe

Update README.md

Files changed (1): README.md (+7 -1)
README.md CHANGED
@@ -22,6 +22,12 @@ size_categories:
 This is the dataset for two papers: **Quality and Quantity of Machine Translation References for Automated Metrics [[paper](https://arxiv.org/abs/2401.01283)]** - the effect of reference quality and quantity on automatic metric performance, and **Evaluating Optimal Reference Translations [[paper](https://arxiv.org/abs/2311.16787)]** - the creation of the data and the human aspects of annotation and translation.
 Please see the [original repository](https://github.com/ufal/optimal-reference-translations) for more information or [contact the authors](mailto:vilem.zouhar@gmail.com) with any questions.
 
+You need to stream the dataset because the `ort_human` and `ort_wmt` configurations have different columns:
+```python3
+data_human = list(load_dataset("zouharvi/optimal-reference-translations", 'ort_human', streaming=True)["train"])
+data_wmt = list(load_dataset("zouharvi/optimal-reference-translations", 'ort_wmt', streaming=True)["train"])
+```
+
 # Quality and Quantity of Machine Translation References for Automated Metrics [[paper](https://arxiv.org/abs/2401.01283)]
 
 > **Abstract:** Automatic machine translation metrics often use _human_ translations to determine the quality of _system_ translations. Common wisdom in the field dictates that the human references should be of very high quality. However, there are no cost-benefit analyses that could be used to guide practitioners who plan to collect references for machine translation evaluation. We find that higher-quality references lead to better metric correlations with humans at the segment level. Having up to 7 references per segment and taking their average helps. Interestingly, the references from vendors of different qualities can be mixed together and improve metric success. Higher-quality references, however, cost more to create, and we frame this as an optimization problem: given a specific budget, what types of references should be collected to maximize metric success. These findings can be used by evaluators of shared tasks when references need to be created under a certain budget.
@@ -67,7 +73,7 @@ Note: If you also want to use the WMT2020 system submissions, please contact
 
 ```python3
 from datasets import load_dataset
-data = load_dataset("zouharvi/optimal-reference-translations", 'ort_human')["train"]
+data = list(load_dataset("zouharvi/optimal-reference-translations", 'ort_human', streaming=True)["train"])
 
 # 220 annotated documents
 len(data)
 
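
For quick reference, below is a minimal, self-contained sketch of the loading pattern this commit introduces. It is not part of the commit itself; the exact column names are not listed here, the snippet only shows how to inspect them.

```python3
# Minimal sketch (not part of the commit): load both configurations in
# streaming mode, as the updated README recommends, since `ort_human`
# and `ort_wmt` expose different columns.
from datasets import load_dataset

data_human = list(load_dataset("zouharvi/optimal-reference-translations", 'ort_human', streaming=True)["train"])
data_wmt = list(load_dataset("zouharvi/optimal-reference-translations", 'ort_wmt', streaming=True)["train"])

# 220 annotated documents in the `ort_human` configuration (per the README)
print(len(data_human))

# inspect the column names of each configuration; they differ between the two
print(sorted(data_human[0].keys()))
print(sorted(data_wmt[0].keys()))
```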