---
license: cc
configs:
- config_name: default
  data_files:
  - split: phase_1
    path: phase_1.json
  - split: phase_2
    path: phase_2.json
task_categories:
- translation
language:
- en
- cs
tags:
- post editing
- quality
size_categories:
- 1K<n<10K
---
# Neural Machine Translation Quality and Post-Editing Performance
This is a repository for an experiment relating NMT quality to post-editing effort, presented at EMNLP 2021 (presentation recording). Please cite the following paper if you use this research:
```bibtex
@inproceedings{zouhar2021neural,
  title={Neural Machine Translation Quality and Post-Editing Performance},
  author={Zouhar, Vil{\'e}m and Popel, Martin and Bojar, Ond{\v{r}}ej and Tamchyna, Ale{\v{s}}},
  booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing},
  pages={10204--10214},
  year={2021},
  url={https://aclanthology.org/2021.emnlp-main.801/}
}
```
You can access the data on Hugging Face:

```python
from datasets import load_dataset

# The dataset has a single config with two splits,
# so select each one via the `split` argument.
data_p1 = load_dataset("zouharvi/nmt-pe-effects", split="phase_1")
data_p2 = load_dataset("zouharvi/nmt-pe-effects", split="phase_2")
```
The first phase is the main one: it measures the effect of NMT quality on post-editing time. The second phase estimates the quality of the first post-editing round.
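As a minimal sketch of one analysis the data enables (the field names `system` and `pe_time` here are assumptions for illustration, not the dataset's actual schema), one could aggregate post-editing time per MT system:

```python
from statistics import mean

# Hypothetical records; the real dataset's columns may be named differently.
records = [
    {"system": "nmt_a", "pe_time": 41.0},
    {"system": "nmt_a", "pe_time": 39.0},
    {"system": "nmt_b", "pe_time": 55.0},
]

def mean_pe_time_per_system(rows):
    """Group rows by MT system and average the post-editing time (seconds)."""
    by_system = {}
    for row in rows:
        by_system.setdefault(row["system"], []).append(row["pe_time"])
    return {system: mean(times) for system, times in by_system.items()}

print(mean_pe_time_per_system(records))  # {'nmt_a': 40.0, 'nmt_b': 55.0}
```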
The code is also public.