---
language:
- multilingual
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
source_datasets:
- FLORES-101, FLORES-200, PAWS-X, XNLI, XTREME, WinoMT, Wino-X, MuCOW, EuroParl ConDisco, ParcorFull
task_categories:
- translation
pretty_name: ACES
---

# Dataset Card for ACES

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Usage](#usage)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:**
- **Repository:** [ACES dataset repository](https://github.com/EdinburghNLP/ACES)
- **Paper:**
- **Leaderboard:**

### Dataset Summary

ACES consists of 36,476 examples covering 146 language pairs and representing challenges from 68 phenomena for evaluating machine translation metrics. We focus on translation accuracy errors and base the phenomena covered in our challenge set on the Multidimensional Quality Metrics (MQM) ontology. The phenomena range from simple perturbations at the word/character level to more complex errors based on discourse and real-world knowledge.

### Supported Tasks and Leaderboards

- Evaluation of machine translation metrics
- Potentially useful for contrastive machine translation evaluation

### Languages

The dataset covers 146 language pairs:

af-en, af-fa, ar-en, ar-fr, ar-hi, be-en, bg-en, bg-lt, ca-en, ca-es, cs-en, da-en, de-en, de-es, de-fr, de-ja, de-ko, de-ru, de-zh, el-en, en-af, en-ar, en-be, en-bg, en-ca, en-cs, en-da, en-de, en-el, en-es, en-et, en-fa, en-fi, en-fr, en-gl, en-he, en-hi, en-hr, en-hu, en-hy, en-id, en-it, en-ja, en-ko, en-lt, en-lv, en-mr, en-nl, en-no, en-pl, en-pt, en-ro, en-ru, en-sk, en-sl, en-sr, en-sv, en-ta, en-tr, en-uk, en-ur, en-vi, en-zh, es-ca, es-de, es-en, es-fr, es-ja, es-ko, es-zh, et-en, fa-af, fa-en, fi-en, fr-de, fr-en, fr-es, fr-ja, fr-ko, fr-mr, fr-ru, fr-zh, ga-en, gl-en, he-en, he-sv, hi-ar, hi-en, hr-en, hr-lv, hu-en, hy-en, hy-vi, id-en, it-en, ja-de, ja-en, ja-es, ja-fr, ja-ko, ja-zh, ko-de, ko-en, ko-es, ko-fr, ko-ja, ko-zh, lt-bg, lt-en, lv-en, lv-hr, mr-en, nl-en, no-en, pl-en, pl-mr, pl-sk, pt-en, pt-sr, ro-en, ru-de, ru-en, ru-es, ru-fr, sk-en, sk-pl, sl-en, sr-en, sr-pt, sv-en, sv-he, sw-en, ta-en, th-en, tr-en, uk-en, ur-en, vi-en, vi-hy, wo-en, zh-de, zh-en, zh-es, zh-fr, zh-ja, zh-ko

## Dataset Structure

### Data Instances

Each data instance contains the following features: _source_, _good-translation_, _incorrect-translation_, _reference_, _phenomena_, _langpair_.

See the [ACES corpus viewer](https://huggingface.co/datasets/nikitam/ACES/viewer/nikitam--ACES/train) to explore more examples.

An example from the ACES challenge set looks like the following:
```
{'source': "Proper nutritional practices alone cannot generate elite performances, but they can significantly affect athletes' overall wellness.", 'good-translation': 'Las prácticas nutricionales adecuadas por sí solas no pueden generar rendimiento de élite, pero pueden afectar significativamente el bienestar general de los atletas.', 'incorrect-translation': 'Las prácticas nutricionales adecuadas por sí solas no pueden generar rendimiento de élite, pero pueden afectar significativamente el bienestar general de los jóvenes atletas.', 'reference': 'No es posible que las prácticas nutricionales adecuadas, por sí solas, generen un rendimiento de elite, pero puede influir en gran medida el bienestar general de los atletas .', 'phenomena': 'addition', 'langpair': 'en-es'}
```
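
A minimal sketch for loading the challenge set with the `datasets` library (assuming the public dataset id `nikitam/ACES`, as used by the corpus viewer above):

```
from datasets import load_dataset

# Load the ACES challenge set (a single "train" split).
aces = load_dataset("nikitam/ACES", split="train")

# Inspect the first example and a few of its fields.
example = aces[0]
print(example["source"])
print(example["good-translation"])
print(example["incorrect-translation"])
print(example["phenomena"], example["langpair"])
```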

### Data Fields

- 'source': a string containing the text that needs to be translated
- 'good-translation': a possible translation of the source sentence
- 'incorrect-translation': a translation of the source sentence that contains an error or phenomenon of interest
- 'reference': the gold-standard translation
- 'phenomena': the type of error or phenomenon exhibited by the example
- 'langpair': the source and target language pair of the example

Note that the _good-translation_ may not be free of errors, but it is a better translation than the _incorrect-translation_.
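
The field names above can be used to slice the challenge set, for example to isolate one language pair or phenomenon. A sketch using the `datasets` `filter` method, assuming `aces` is the _train_ split loaded as in the example above:

```
# Keep only examples for the en-de language pair.
aces_ende = aces.filter(lambda ex: ex["langpair"] == "en-de")

# Keep only examples of the "addition" phenomenon.
additions = aces.filter(lambda ex: ex["phenomena"] == "addition")
```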

### Data Splits

The ACES dataset has a single split, _train_, which contains the challenge set of 36,476 examples.

## Dataset Creation

### Curation Rationale

With the advent of neural networks, and especially Transformer-based architectures, machine translation outputs have become increasingly fluent. Fluency errors are also judged less severely than accuracy errors by human evaluators (Freitag et al., 2021), which reflects the fact that accuracy errors can have dangerous consequences in certain contexts, for example in the medical and legal domains. For these reasons, we decided to build a challenge set focused on accuracy errors.

Another aspect we focus on is including a broad range of language pairs in ACES. Whenever possible, we create examples for all language pairs covered in a source dataset when we use automatic approaches. For phenomena where we create examples manually, we also aim to cover at least two language pairs per phenomenon, but are of course limited to the languages spoken by the authors.

We aim to offer a collection of challenge sets covering both easy and hard phenomena. While it may be of interest to the community to continuously test on harder examples to check where machine translation evaluation metrics still break, we believe that easy challenge sets are just as important to ensure that metrics do not suddenly become worse at identifying error types that were previously considered "solved". Therefore, we take a holistic view when creating ACES and do not filter out individual examples or exclude challenge sets based on baseline metric performance or other factors.

### Source Data

#### Initial Data Collection and Normalization

Please see Sections 4 and 5 of the paper.

#### Who are the source language producers?

The dataset contains sentences drawn from the FLORES-101, FLORES-200, PAWS-X, XNLI, XTREME, WinoMT, Wino-X, MuCOW, EuroParl ConDisco, and ParcorFull datasets. Please refer to the respective papers for further details.

### Personal and Sensitive Information

The external datasets may contain sensitive information. Refer to the respective datasets for further details.

## Considerations for Using the Data

### Usage

ACES is primarily designed to evaluate machine translation metrics on accuracy errors. We expect a metric to consistently score the _good-translation_ higher than the _incorrect-translation_. We report metric performance using a Kendall's tau-like correlation, which counts the number of times a metric scores the good translation above the incorrect translation (concordant) versus equal to or lower than the incorrect translation (discordant).
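
As an illustration, below is a minimal sketch of one such tau-like statistic, (concordant - discordant) / (concordant + discordant), assuming the metric's segment-level scores for each good/incorrect pair have already been computed (the function and variable names are hypothetical):

```
def kendall_tau_like(good_scores, incorrect_scores):
    """Kendall's tau-like correlation over paired metric scores.

    good_scores[i] and incorrect_scores[i] are a metric's scores for the
    good and incorrect translations of example i.
    """
    concordant = sum(g > i for g, i in zip(good_scores, incorrect_scores))
    # Ties count as discordant: the metric failed to prefer the good translation.
    discordant = len(good_scores) - concordant
    return (concordant - discordant) / (concordant + discordant)

# Example: a metric that prefers the good translation in 3 of 4 pairs
# obtains tau = (3 - 1) / (3 + 1) = 0.5.
print(kendall_tau_like([0.9, 0.8, 0.7, 0.5], [0.6, 0.7, 0.8, 0.4]))
```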

### Discussion of Biases

Some examples within the challenge set exhibit biases; however, this is necessary in order to expose the limitations of existing metrics.

### Other Known Limitations

The ACES challenge set exhibits a number of biases. Firstly, there is greater coverage, in terms of phenomena and the number of examples, for the en-de and en-fr language pairs. This is in part due to the manual effort required to construct examples for some phenomena, in particular those belonging to the discourse-level and real-world knowledge categories. Further, our choice of language pairs is limited to those available in XLM-R. Secondly, ACES contains more examples for phenomena whose examples could be generated automatically than for those that required manual construction or filtering. Thirdly, some of the automatically generated examples rely on external libraries that are only available for a few languages (e.g. Multilingual WordNet). Fourthly, the focus of the challenge set is on accuracy errors; we leave the development of challenge sets for fluency errors to future work.

As a result of using existing datasets as the basis for many of the examples, errors present in these datasets may propagate into ACES. Whilst we acknowledge that this is undesirable, our methods for constructing the incorrect translation aim to ensure that its quality is always worse than that of the corresponding good translation.

The results and analyses presented in the paper exclude those metrics submitted to the WMT 2022 metrics shared task that provide only system-level outputs. We focus on metrics that provide segment-level outputs, as this enables us to give a broad overview of metric performance on different phenomenon categories and to conduct fine-grained analyses of performance on individual phenomena. For some of the fine-grained analyses, we apply additional constraints based on the language pairs covered by the metrics, or on whether the metrics take the source as input, to address specific questions of interest. As a result of applying some of these additional constraints, our investigations tend to focus more on high- and medium-resource languages than on low-resource languages. We hope to address this shortcoming in future work.

## Additional Information

### Licensing Information

The ACES dataset is licensed under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0).

### Citation Information

Coming soon

Dataset card based on [Allociné](https://huggingface.co/datasets/allocine)