{ "results": { "results": { "other-metrics-definitions": "N/A", "has-previous-results": "yes", "current-evaluation": "We evaluated a wide range of models as part of the GEM benchmark.", "previous-results": "Results can be found at https://gem-benchmark.com/results.", "original-evaluation": "For both languages, the participating systems are automatically evaluated in a multi-reference scenario. Each English hypothesis is compared to a maximum of 5 references, and each Russian one to a maximum of 7 references. On average, the English data has 2.89 references per test instance, and the Russian data has 2.52 references per instance. \n\nIn a human evaluation, examples are uniformly sampled across triple-set sizes, and the following dimensions are assessed (on MTurk and Yandex.Toloka):\n\n1. Data Coverage: Does the text include descriptions of all predicates presented in the data?\n2. Relevance: Does the text describe only those predicates (with their related subjects and objects) that are found in the data?\n3. Correctness: When describing predicates found in the data, does the text mention the correct objects and adequately introduce the subject for each specific predicate?\n4. Text Structure: Is the text grammatical, well-structured, and written in acceptable English?\n5. Fluency: Does the text progress naturally, form a coherent whole, and is it easy to understand?\n\nFor additional information, such as the annotation instructions, we refer to the original paper.\n" } }, "considerations": { "pii": { "risks-description": "There is no PII in this dataset." 
}, "licenses": { "dataset-restrictions-other": "N/A", "data-copyright-other": "N/A", "dataset-restrictions": [ "non-commercial use only" ], "data-copyright": [ "public domain" ] }, "limitations": { "data-technical-limitations": "The quality of the crowdsourced references is limited, in particular with respect to the fluency/naturalness of the collected texts.\n\nThe Russian data was machine-translated and then post-edited by crowdworkers, so some examples may still exhibit issues stemming from poor translations.\n", "data-unsuited-applications": "Only a limited number of domains are covered in this dataset. As a result, it cannot be used as a general-purpose realizer. ", "data-discouraged-use": "N/A" } }, "context": { "previous": { "is-deployed": "yes - related tasks", "described-risks": "We do not foresee any particular negative social impact from this dataset or task.\n\nPositive outlook: Being able to generate good-quality text from RDF data would permit, e.g., making this data more accessible to lay users, enriching existing text with information drawn from knowledge bases such as DBpedia, or describing, comparing, and relating entities present in these knowledge bases.\n", "changes-from-observation": "N/A" }, "underserved": { "helps-underserved": "no", "underserved-description": "N/A" }, "biases": { "has-biases": "yes", "bias-analyses": "This dataset is created from DBpedia RDF triples, which naturally exhibit biases known to exist in Wikipedia, such as gender bias.\n\nThe choice of [entities](https://gitlab.com/shimorina/webnlg-dataset/-/blob/master/supplementary/entities_dict.json), described by RDF trees, was not controlled. As such, they may reflect gender biases; for instance, all the astronauts described by RDF triples are male. Hence, in the texts, the pronouns _he/him/his_ occur more often. 
Similarly, entities are related to Western culture more often than to other cultures.\n", "speaker-distibution": "In English, the dataset is limited to the language varieties spoken by the crowdworkers. In Russian, the language is heavily biased by the translationese of the machine translation system whose output was post-edited. " } } }