|
# QE4PE Post-task |
|
|
|
The goal of the `posttask` is to evaluate translators' editing speed and quality after they have acquired proficiency with the GroTE interface, using a fixed set of additional texts drawn from the same distribution as the `pretask` and `main` tasks. All translators work in the `no_highlight` modality, which provides an estimate of each individual's baseline speed. Refer to the [translators' guidelines](./posttask_guidelines.pdf) for additional details about the task.
|
|
|
## Folder Structure |
|
|
|
```shell
posttask/
├── inputs/
│   ├── eng-ita/
│   │   ├── posttask_eng-ita_doc1_input.txt
│   │   ├── posttask_eng-ita_doc2_input.txt
│   │   └── ... # GroTE input files with tags and ||| source-target separator
│   └── eng-nld/
│       ├── posttask_eng-nld_doc1_input.txt
│       ├── posttask_eng-nld_doc2_input.txt
│       └── ... # GroTE input files with tags and ||| source-target separator
├── outputs/
│   ├── eng-ita/
│   │   ├── logs/
│   │   │   ├── posttask_eng-ita_oracle_t1_logs.csv
│   │   │   └── ... # GroTE logs for every translator (e.g. oracle_t1, uses main task IDs)
│   │   ├── metrics/
│   │   │   ├── posttask_eng-ita_oracle_t1_metrics.csv
│   │   │   └── ... # Metrics for every translator (e.g. oracle_t1, uses main task IDs)
│   │   ├── posttask_eng-ita_doc1_oracle_t1_output.txt
│   │   └── ... # GroTE output files (one edited target per line)
│   └── eng-nld/
│       ├── logs/
│       │   ├── posttask_eng-nld_oracle_t1_logs.csv
│       │   └── ... # GroTE logs for every translator (e.g. oracle_t1, uses main task IDs)
│       ├── metrics/
│       │   ├── posttask_eng-nld_oracle_t1_metrics.csv
│       │   └── ... # Metrics for every translator (e.g. oracle_t1, uses main task IDs)
│       ├── posttask_eng-nld_doc1_oracle_t1_output.txt
│       └── ... # GroTE output files (one post-edited segment per line)
├── doc_id_map.json         # Source and doc name maps
├── posttask_guidelines.pdf # Task guidelines for translators
└── README.md
```
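
For orientation, here is a minimal sketch (assuming the layout above and Python 3.9+) that groups the post-edited output files by translator; the paths and grouping logic are illustrative, not part of the released tooling:

```python
from collections import defaultdict
from pathlib import Path

outputs_by_translator = defaultdict(list)
for path in Path("posttask/outputs").glob("eng-*/*_output.txt"):
    # Filenames follow "{TASK_ID}_{TRANSLATION_DIRECTION}_{DOC_ID}_{TRANSLATOR_ID}_output.txt",
    # so the translator ID is everything after the third underscore, minus the suffix.
    translator_id = path.stem.split("_", 3)[3].removesuffix("_output")
    outputs_by_translator[translator_id].append(path)

for translator_id, files in sorted(outputs_by_translator.items()):
    print(translator_id, len(files))
```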
|
|
|
## Important Notes about Data Issues |
|
|
|
Translator `unsupervised_t1` for the `eng-ita` direction could not complete the `posttask` and is therefore absent from the data.
|
|
|
## Inputs |
|
|
|
The `posttask` uses eight documents of 3 to 8 contiguous segments each, drawn from the same `wmt23` collection as the `main` task: these documents matched all the `main` task requirements but were not selected for the main collection. [doc_id_map.json](./doc_id_map.json) shows the document assignments from the original collection.
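
A minimal sketch for inspecting this map with Python's `json` module; since the exact key layout is not documented here, the snippet simply prints the structure for verification:

```python
import json

# Load the mapping between posttask doc IDs and the original wmt23 documents.
with open("posttask/doc_id_map.json", encoding="utf-8") as f:
    doc_id_map = json.load(f)

print(json.dumps(doc_id_map, indent=2)[:500])  # peek at the key layout
```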
|
All translators edited these documents in the `no_highlight` setting to provide a baseline for the speed/quality measurements of the `main` task. Input files for the task are named using the format:
|
|
|
```python
"{{TASK_ID}}_{{TRANSLATION_DIRECTION}}_{{DOC_ID}}_input.txt"
```
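
As noted in the folder structure above, each input line contains a tagged source and its MT target separated by `|||`. A minimal reading sketch under that assumption (tags are left in place; the helper name is hypothetical):

```python
from pathlib import Path

def read_grote_input(path: str) -> list[tuple[str, str]]:
    """Split each non-empty line of a GroTE input file into (source, target)."""
    pairs = []
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        if line.strip():
            source, target = line.split("|||", maxsplit=1)
            pairs.append((source.strip(), target.strip()))
    return pairs

segments = read_grote_input("posttask/inputs/eng-ita/posttask_eng-ita_doc1_input.txt")
```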
|
|
|
## Outputs |
|
|
|
Files in `outputs/eng-ita` and `outputs/eng-nld` contain post-edited outputs (one segment per line, matching the corresponding input files) and are named using the format:
|
|
|
```python
"{{TASK_ID}}_{{TRANSLATION_DIRECTION}}_{{DOC_ID}}_{{TRANSLATOR_ID}}_output.txt"
```
|
|
|
> **Note**: `{{TRANSLATOR_ID}}` includes the `{{HIGHLIGHT_MODALITY}}` information matching the ID from the `main` task. Nevertheless, **all editing for the `posttask` was performed in the `no_highlight` setting**.
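
A minimal sketch of a hypothetical parser for the filename format above, assuming the two translation directions in this release:

```python
import re

# Hypothetical helper: the pattern mirrors the naming format documented above.
OUTPUT_PATTERN = re.compile(
    r"(?P<task_id>posttask)_(?P<direction>eng-(?:ita|nld))_(?P<doc_id>doc\d+)_"
    r"(?P<translator_id>\w+)_output\.txt"
)

match = OUTPUT_PATTERN.fullmatch("posttask_eng-ita_doc1_oracle_t1_output.txt")
if match:
    print(match.groupdict())
    # {'task_id': 'posttask', 'direction': 'eng-ita',
    #  'doc_id': 'doc1', 'translator_id': 'oracle_t1'}
```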
|
|
|
The contents of `outputs/{{TRANSLATION_DIRECTION}}/logs` can be used to analyze editing behavior at a fine-grained level. Each log file is named using the format:
|
|
|
```python
"{{TASK_ID}}_{{TRANSLATION_DIRECTION}}_{{TRANSLATOR_ID}}_logs.csv"
```
|
|
|
Refer to the [GroTE](https://github.com/gsarti/grote) documentation for more information about the logs.
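
Since the log schema is defined by GroTE, a safe first step is to load one file and inspect its columns before analysis; a minimal sketch using pandas:

```python
import pandas as pd

logs = pd.read_csv("posttask/outputs/eng-ita/logs/posttask_eng-ita_oracle_t1_logs.csv")
print(logs.columns.tolist())  # inspect the event fields recorded by GroTE
print(logs.head())
```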
|
|
|
The files in `outputs/{{TRANSLATION_DIRECTION}}/metrics` contain aggregated metrics for each translator and are named using the format:
|
|
|
```python
"{{TASK_ID}}_{{TRANSLATION_DIRECTION}}_{{TRANSLATOR_ID}}_metrics.csv"
```
|
|
|
Metrics include: |
|
|
|
1. The max/min/mean/std of BLEU, chrF, TER, and COMET scores computed between the MT or post-edited (PE) texts and all post-edited variants of the same segment (excluding the translator's own variant, in the PE case) for `{{TRANSLATION_DIRECTION}}`; see the sketch after this list for how such pairwise scores can be computed.
|
|
|
2. XCOMET-XXL segment-level QE scores and errors for the MT and PE texts. |
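
These metrics ship precomputed, but as an illustration of the pairwise scoring in point 1, here is a minimal sketch computing chrF between one post-edited variant and the others using sacrebleu; the example sentences are invented, and this is not necessarily the exact pipeline used for the released files:

```python
from sacrebleu.metrics import CHRF

chrf = CHRF()

# Hypothetical example: one translator's post-edit scored against the
# post-edited variants produced by two other translators for the same segment.
hypothesis = "Il gatto è seduto sul tappeto."
references = [
    "Il gatto sta seduto sul tappeto.",
    "Il gatto è seduto sopra il tappeto.",
]

score = chrf.sentence_score(hypothesis, references)
print(score.score)
```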
|
|